Running S3cmd from PHP Script not working - amazon-s3

I want to use s3cmd from my PHP script. Everything works from the shell, but the same thing does not work from my PHP script:
shell_exec('s3cmd --config=/root/s3cmd.conf ls');
This does not work, so I then gave the full path to my s3cmd installation:
shell_exec('/usr/sbin/s3cmd --config=/root/s3cmd.conf ls');
This does not work in the PHP script either, although the same command works in the shell.
The PHP file that calls shell_exec() is in the webroot.
The problem might be that s3cmd is configured as the root user, while PHP runs as www-data. If that is the problem, how can I create a config file for www-data?
Please help me understand what I am doing wrong.
Thanks
EDIT
I am using s3cmd to run commands in my cron script. The cron script is a PHP script. The user running the cron job is web11, and s3cmd is configured using the root user.
So when I run s3cmd using shell_exec() in my PHP script, it fails, but when I run it in the shell it works fine:
s3cmd ls
This works fine, as I am logged in as the root user.
I tried to run it using the runuser command:
runuser -l root -c "s3cmd ls"
This works fine in the shell and displays the list of buckets, but when I run the same runuser command through shell_exec() in my PHP script, it does not work. I also tried giving the full path to s3cmd:
/usr/bin/s3cmd ls
This works in the shell but not in my PHP script.
I changed the permissions of the PHP script to 777 and made root its owner, but it still does not work.
How can I run s3cmd from a PHP script? I am on an Amazon EC2 instance.

Why not use the AWS SDK for PHP? You can get the same functionality using the S3Client::listObjects() method.

shell_exec('s3cmd --config=/root/s3cmd.conf ls');
What might be missing here is the target of the s3cmd ls command.
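That is, try pointing ls at a specific bucket (the bucket name below is a placeholder):
s3cmd --config=/root/s3cmd.conf ls s3://your-bucket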
It works fine in PHP scripts like this:
exec("/usr/bin/s3cmd --config=.s3cfg info s3://YOUR-BUCKET/YOUR-FILE 2> /dev/null", $s3output, $s3return);
// switch due to return code (of shell !)
if($s3return == "0"){
echo "my file exists";
}
else {
echo "Error Code : " . $s3return;
}
This assumes you have run "s3cmd --configure" successfully.
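To come back to the www-data question from the original post: one option (a sketch with assumed paths, not taken from the answers above) is to copy the root config to a location the web server user can read, and point --config at it:
# Sketch: give www-data its own s3cmd config (all paths are assumptions)
sudo cp /root/s3cmd.conf /var/www/.s3cfg
sudo chown www-data:www-data /var/www/.s3cfg
sudo chmod 600 /var/www/.s3cfg
The PHP call then becomes shell_exec('/usr/bin/s3cmd --config=/var/www/.s3cfg ls');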

Related

Apache Airflow command not found with SSHOperator

I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 ="""
cd /files/232-065/Rans
bash run.sh
"""
Where 'run.sh' runs the shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine (it does not require a GUI). This makes me think that the SSH session Airflow opens is not the same as the one I log into myself, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least judging from the error message). The issue is with the starccm+ installation path.
Please check the installation path of starccm+.
Check whether the installation path is part of the $PATH environment variable:
$ echo $PATH
If not, then install it in a standard location like /bin or /usr/bin (provided they are included in $PATH), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
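In a non-interactive SSH session ~/.bashrc may not be sourced, so the export has to happen inside the script itself. A minimal sketch of run.sh (the starccm+ install directory below is an assumption):
#!/bin/sh
# Make starccm+ visible to this non-login shell
# (the install directory is an assumption; substitute your own)
export PATH="$PATH:/opt/starccm/bin"
starccm+ -batch run_export.java Rans_Model.sim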
It is not ideal, but if you struggle with setting the PATH variable you can run starccm+ by giving the full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim

Why is $PATH different when executing commands via SSH and libssh?

I'm trying to run a command on a remote host via libssh2 as wrapped by the ssh2 Rust crate.
So I would like to run the command cargo build, but when I try to run it via libssh, I get the error:
cargo: command not found
However, when I ssh into the server manually from the command line everything works fine.
I have noticed that $PATH is also different between running ssh from the command line and running via libssh:
for instance, when I echo $PATH,
ssh gives me:
/home/<user>/.cargo/bin:/usr/share/swift/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bi
while libssh gives me:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
So it looks like the modifications made to $PATH inside .bashrc and .profile are not being applied when running via libssh.
I also get the same behavior if I run /bin/bash -c "echo ${PATH}"
Why would this be the case, and is there any way to get the same behavior in both these cases?
Please take a look at that question.
TL;DR: A login shell first reads /etc/profile and then ~/.bash_profile. A non-login shell reads /etc/bash.bashrc and then ~/.bashrc.
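A practical workaround (my sketch, not from the original answer) is to force a login shell when executing the remote command, so the profile files are read and $PATH matches the interactive session:
# Run the command through a login shell (-l) so /etc/profile and
# ~/.bash_profile are sourced before cargo is looked up
/bin/bash -lc 'cargo build'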

Running .sh scripts in Git Bash

I'm on a Windows machine using Git 2.7.2.windows.1 with MinGW 64.
I have a script in C:/path/to/scripts/myScript.sh.
How do I execute this script from my Git Bash instance?
It was possible to add it to the .bashrc file and then just execute the entire bashrc file.
But I want to add the script to a separate file and execute it from there.
Let's say you have a script script.sh. To run it (using Git Bash), you do the following: [a] add a "shebang" line as the first line (e.g. #!/bin/bash), and then [b]:
# Use ./ (or any valid dir spec):
./script.sh
Note: chmod +x does nothing to a script's executability on Git Bash. It won't hurt to run it, but it won't accomplish anything either.
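For instance, a minimal (hypothetical) script would be:
#!/bin/bash
# Minimal example script
echo "Hello from Git Bash"
Save it as script.sh and run it with ./script.sh.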
#!/usr/bin/env sh
This is how Git Bash knows a file is executable; chmod a+x does nothing in Git Bash. (Note: any shebang will work, e.g. #!/bin/bash, etc.)
If you wish to execute a script file from the git bash prompt on Windows, just precede the script file with sh
sh my_awesome_script.sh
If you are on Linux or Ubuntu, write ./file_name.sh.
If you are on Windows, just write sh before the file name, like this: sh file_name.sh
For Linux -> ./filename.sh
For Windows -> sh file_name.sh
If you run an export command in your bash script, the solutions given above may not export anything, even though the script runs. As an alternative, you can source the script:
. script.sh
Now if you try to echo your variable, it will be shown. See the result in my Git Bash session:
(coffeeapp) user (master *) capstone
$ . setup.sh
done
(coffeeapp) user (master *) capstone
$ echo $ALGORITHMS
[RS256]
(coffeeapp) user (master *) capstone
$
Check more detail in this question
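For reference, a setup.sh along these lines (reconstructed from the transcript above, so treat it as a hypothetical sketch) would be:
#!/bin/bash
# Hypothetical setup.sh: when sourced with ". setup.sh",
# the export persists in the calling shell
export ALGORITHMS='[RS256]'
echo done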
I had a similar problem, but I was getting an error message
cannot execute binary file
I discovered that the filename contained non-ASCII characters. When those were fixed, the script ran fine with ./script.sh.
Once you're in the directory, just run it as ./myScript.sh
If by any chance you've changed the default program for opening .sh files to a text editor, like I had, you can just run bash .\yourscript.sh, provided you have Git Bash installed and on your PATH.
I had two .sh scripts to start and stop DigitalOcean servers that I wanted to run from Windows 10. What I did is:
downloaded "Git for Windows" (from https://git-scm.com/download/win),
installed Git,
and to execute a .sh script, just double-clicked the script file, which started the execution of the script.
Now to run a script I just double-click it.
Putting #!/bin/bash at the top of the file is what makes the .sh file executable in Git Bash.
I agree that chmod does not do anything there, but the line above solves the problem.
You can either give the entire path in Git Bash to execute it, or add it to the PATH variable:
export PATH=$PATH:/path/to/the/script
Then you can run it from anywhere.
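To make that change permanent (a common approach, assumed rather than stated in the answer above), append the export to your ~/.bashrc:
# Persist the PATH addition for future Git Bash sessions
echo 'export PATH=$PATH:/path/to/the/script' >> ~/.bashrc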

Setup Amazon S3 backup on QNAP using s3cmd

I own a QNAP TS-219P and I want to set this up manually using s3cmd.
I did quite a bit of research on this, and here are the references I got:
http://web.archive.org/web/20091120211330/http://codemonkeybrown.com/qnaps3.html
http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup
http://wiki.qnap.com/wiki/Add_items_to_crontab
http://blog.wingateuk.com/2013/03/cloud-backup-on-qnap-nas.html?showComment=1413660445187#c8935766892046800936
I'm trying to get s3cmd to work on my TS-219P.
I got everything working on the command line, including running the script file (s3-backup.sh):
#!/bin/bash
/share/maintenance/s3cmd-1.5.0-rc1/s3cmd --rr sync -rv /share/all-shared-folders/emilie/ s3://kingjim-backup/kingjim-nas/emilie/ >> /share/maintenance/log/s3cmd/backup_`date "+%Y%m%d-%H-%M"`.log
(I also tried #!/bin/sh as the shebang, and running s3cmd via Python by prefixing the command with /usr/bin/python.)
If I run using the SSH command prompt, it seems to work perfectly.
The problem, though, is the cronjob. I can confirm the cronjob triggered and ran, because my log file (the one above) was generated, but the log is always empty, even though I'm sure some new files were created/modified.
This is my cronjob task:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger
I've tried a number of different variations on the above, but couldn't find out what was missing.
I feel like some dependency is missing when the crontab runs, compared to when I run it at the command prompt. But I don't know how to debug crontab.
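One generic way to debug this (my suggestion, not from the original post) is to capture the environment cron actually runs with and compare it against your interactive shell:
# Temporary crontab entry: dump cron's environment to a file
* * * * * env > /tmp/cron-env.log 2>&1
Comparing /tmp/cron-env.log with the output of env in an SSH session usually reveals what is missing (often HOME or PATH).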
It turned out that the problem was that the s3cmd configuration file was not found when s3cmd ran from cron.
So the fix was simply to copy the .s3cfg file to a safe shared folder, and then call s3cmd with the --config parameter followed by that file, like this:
/share/maintenance/s3-backup/s3cmd/s3cmd --config /share/maintenance/s3-backup/s3cmd.config --rr sync -rv /share/MD0_DATA/ s3://xxx-backup/xxx-nas/ >> /share/maintenance/s3-backup/logs/backup_`date "+%Y%m%d-%H-%M"`.log 2>&1

How do I call a local shell script from a web server?

I am running Ubuntu 11 and I would like to set up a simple web server that responds to an HTTP request by calling a local script with the GET or POST parameters. This script (already written) does some stuff and creates a file. This file should be made available at a URL, and the web server should then make an HTTP request to another server, telling it to download the created file.
How would I go about setting this up? I'm not a total beginner with Linux, but I wouldn't say I know it well either.
What web server should I use? How do I give the script permission to access local resources and create the file in question? I'm not too concerned with security or anything; this is for a personal experiment (I have control over all the computers involved). I've used Apache before, but I've never set it up.
Any help would be appreciated.
This tutorial looks good, but it's a bit brief.
I have Apache installed. If you don't: sudo apt-get install apache2.
cd /usr/lib/cgi-bin
# Make a file and let everyone execute it
sudo touch test.sh && sudo chmod a+x test.sh
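Depending on your Apache version, you may also need to enable the CGI module first (an extra step not in the original answer; on some installs it is already enabled):
# Enable Apache's CGI module and restart (harmless if already enabled)
sudo a2enmod cgi
sudo service apache2 restart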
Then put some code in the file. For example:
#!/bin/bash
# get today's date
OUTPUT="$(date)"
# You must add the following two lines
# before outputting data to the web
# browser from a shell script
echo "Content-type: text/html"
echo ""
echo "<html><head><title>Demo</title></head><body>"
echo "Today is $OUTPUT <br>"
echo "Current directory is $(pwd) <br>"
echo "Shell Script name is $0"
echo "</body></html>"
And finally, open your browser and go to http://localhost/cgi-bin/test.sh
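You can also test it from a terminal (a usage sketch):
# Fetch the CGI output without a browser
curl http://localhost/cgi-bin/test.sh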
If all goes well (as it did for me) you should see...
Today is Sun Dec 4 ...
Current directory is /usr/lib/cgi-bin
Shell Script name is /usr/lib/cgi-bin/test.sh