How to check whether an SSH command executed successfully in PHP? - ssh

I am running an SSH command using phpseclib. The if condition here shows the wrong message. How can I check in the if condition whether the SSH command executed? My code is:
$command = "sudo rm /path/filename";
if (!$ssh->exec($command)) {
    $response['success'] = false;
    $response['messages'] = 'File delete Failed';
} else {
    $response['success'] = true;
    $response['messages'] = 'File deleted';
}
The file is deleted successfully; only the message is wrong. Please help me with this.
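A likely cause: phpseclib's exec() returns the command's output, and a successful sudo rm prints nothing, so the empty string makes !$ssh->exec($command) true and the failure branch runs. A minimal sketch of checking the remote exit status instead, assuming $ssh is a connected phpseclib SSH2 instance:
$command = "sudo rm /path/filename";
$ssh->exec($command); // rm prints nothing on success, so don't test the output
if ($ssh->getExitStatus() === 0) { // 0 means the remote command succeeded
    $response['success'] = true;
    $response['messages'] = 'File deleted';
} else {
    $response['success'] = false;
    $response['messages'] = 'File delete Failed';
}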

Related

Error Running terminal command in Mac App

I'm trying to run the terminal command pluginkit -m in my Mac app, but it returns the message match: unauthorized discovery flag (PKDiscoverAll). If I run the ls command instead, it works fine.
Can someone please explain how I can run the pluginkit -m command, or point me to documentation where Apple explains why these commands can't be run from an app? These commands run fine in my terminal.
Here is my code:
let task = Process()
let pipe = Pipe()
task.standardOutput = pipe
task.standardError = pipe
task.arguments = ["-c", "pluginkit -m"]
task.executableURL = URL(fileURLWithPath: "/bin/zsh") //<--updated
task.standardInput = nil
try task.run()
let data = pipe.fileHandleForReading.readDataToEndOfFile()
let output = String(data: data, encoding: .utf8)!
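A debugging sketch, not a fix: waiting for the process and printing its termination status (both standard Process API) makes it easier to confirm that the launch itself works and that the failure comes from pluginkit refusing the query, which suggests a policy restriction (e.g. App Sandbox) rather than a problem with the launch code:
import Foundation

let task = Process()
let pipe = Pipe()
task.standardOutput = pipe
task.standardError = pipe
task.executableURL = URL(fileURLWithPath: "/bin/zsh")
task.arguments = ["-c", "pluginkit -m"]
try task.run()
task.waitUntilExit() // block until pluginkit has finished

let data = pipe.fileHandleForReading.readDataToEndOfFile()
let output = String(data: data, encoding: .utf8) ?? ""
// A non-zero status alongside the "unauthorized discovery flag" message
// means the process launched fine and pluginkit itself denied the request.
print("exit status: \(task.terminationStatus)")
print(output)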

How to store Terraform provisioner "local-exec" output in local variable and use variable value in "remote-exec"

I am working with Terraform provisioners, and in one scenario I need to execute a 'local-exec' provisioner and use the output of the command (an array of IP addresses) in the next 'remote-exec' provisioner.
I am not able to store the 'local-exec' provisioner output in a local variable to use later. I can store it in a local file, but not in an intermediate variable.
This is not working:
count = "${length(data.local_file.instance_ips.content)}"
resource "null_resource" "get-instance-ip-41" {
provisioner "local-exec" {
command = "${path.module}\\scripts\\findprivateip.bat > ${data.template_file.PrivateIpAddress.rendered}"
}
}
data "template_file" "PrivateIpAddress" {
template = "/output.log"
}
data "local_file" "instance_ips" {
filename = "${data.template_file.PrivateIpAddress.rendered}"
depends_on = ["null_resource.get-instance-ip-41"]
}
output "IP-address" {
value = "${data.local_file.instance_ips.content}"
}
# ---------------------------------------------------------------------------------------------------------------------
# Update the instances by installing the New Relic agent using remote-exec
# ---------------------------------------------------------------------------------------------------------------------
resource "null_resource" "copy_file_newrelic_v_29" {
  depends_on = ["null_resource.get-instance-ip-41"]
  count      = "${length(data.local_file.instance_ips.content)}"
  triggers = {
    cluster_instance_id = "${element(values(data.local_file.instance_ips.content[count.index]), 0)}"
  }
  provisioner "remote-exec" {
    connection {
      agent               = "true"
      bastion_host        = "${aws_instance.bastion.*.public_ip}"
      bastion_user        = "ec2-user"
      bastion_port        = "22"
      bastion_private_key = "${file("C:/keys/nvirginia-key-pair-ajoy.pem")}"
      user                = "ec2-user"
      private_key         = "${file("C:/keys/nvirginia-key-pair-ajoy.pem")}"
      host                = "${self.triggers.cluster_instance_id}"
    }
    inline = [
      "echo 'license_key: 34adab374af99b1eaa148eb2a2fc2791faf70661' | sudo tee -a /etc/newrelic-infra.yml",
      "sudo curl -o /etc/yum.repos.d/newrelic-infra.repo https://download.newrelic.com/infrastructure_agent/linux/yum/el/6/x86_64/newrelic-infra.repo",
      "sudo yum -q makecache -y --disablerepo='*' --enablerepo='newrelic-infra'",
      "sudo yum install newrelic-infra -y"
    ]
  }
}
Unfortunately you can't. The solution I have found is to instead use an external data source block. You can run a command from there and retrieve the output(s); the only catch is that the command needs to produce JSON on standard output (stdout). See the documentation here. I hope this is of some help to others trying to solve this problem.
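A minimal sketch of that approach; the script name findprivateip.sh and the ips key are assumptions, the only contract being that the program prints a single JSON object (string values only) to stdout:
data "external" "instance_ips" {
  program = ["bash", "${path.module}/scripts/findprivateip.sh"]
}

locals {
  # The script is assumed to print {"ips": "10.0.1.5,10.0.1.6"}.
  # External data source values must all be strings, so the list is
  # comma-joined by the script and split back into a list here.
  private_ips = "${split(",", data.external.instance_ips.result["ips"])}"
}
The count above could then become "${length(local.private_ips)}", with each host taken from "${element(local.private_ips, count.index)}".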

Error while executing a Perl script and not able to set the environment variables

I executed the Perl script below:
#!/usr/bin/perl
use strict;
use DBD::Oracle;
use DBI;
my $driver = "Oracle";
my $database = "host=xxxxxx;port=6210;sid=xxxx";
my $dsn = "DBI:$driver:$database";
my $userid = "xxxxx";
my $password = "xxxxx";
#Database Connection
my $dbh = DBI->connect($dsn, $userid, $password, {RaiseError => 1}) or die "$DBI::errstr";
my $sth = $dbh->prepare("update collabuser set user_email='aravikum.wipro.com' where user_login='aravikum'") or die "$DBI::errstr";
$sth->execute() or die "couldn't execute statement\n$!";
$sth->rows;
#End of Program
$sth->finish();
$dbh->disconnect();
I got the error below:
Can't load '/usr/local/lib64/perl5/auto/DBD/Oracle/Oracle.so' for module DBD::Oracle: libocci.so.11.1: cannot open shared object file: No such file or directory at /usr/lib64/perl5/DynaLoader.pm line 190.
at perlupdt.pl line 11.
Compilation failed in require at perlupdt.pl line 11.
BEGIN failed--compilation aborted at perlupdt.pl line 11.
On googling, I found an answer saying that running the export commands below fixed the issue, so I executed them and it worked fine:
export ORACLE_HOME=/usr/lib/oracle/11.2/client64
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib:$LD_LIBRARY_PATH
export PATH=/usr/lib/oracle/11.2/client64/bin:$PATH
But I can't execute the above commands every time I log in to PuTTY, so I decided to put these exports inside the script, adding them at the start of the script:
$ENV{"ORACLE_HOME"} = '/usr/lib/oracle/11.2/client64';
$ENV{"LD_LIBRARY_PATH"} = '/usr/lib/oracle/11.2/client64/lib:$LD_LIBRARY_PATH';
$ENV{"PATH"} = '/usr/lib/oracle/11.2/client64/bin:$PATH';
But I am still getting the error stated above. Please suggest a way to set these variables inside my Perl script.
Setting the above export commands in /etc/profile and /etc/bashrc, and adding them as a .sh file under /etc/profile.d, fixed the issue.
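The in-script attempt fails for two reasons: single-quoted Perl strings don't interpolate $LD_LIBRARY_PATH, and the dynamic linker only reads LD_LIBRARY_PATH at process start-up, so assigning to %ENV after perl is already running can't change which shared libraries it loads. If editing the shell profiles isn't an option, a commonly used workaround, sketched here, is to set the variables and re-exec the script before DBD::Oracle loads:
#!/usr/bin/perl
# Re-exec with the Oracle environment in place before any Oracle library
# is loaded; the guard variable prevents an endless exec loop.
BEGIN {
    unless ($ENV{ORACLE_ENV_SET}) {
        $ENV{ORACLE_ENV_SET}  = 1;
        $ENV{ORACLE_HOME}     = '/usr/lib/oracle/11.2/client64';
        $ENV{LD_LIBRARY_PATH} = '/usr/lib/oracle/11.2/client64/lib'
                              . ($ENV{LD_LIBRARY_PATH} ? ":$ENV{LD_LIBRARY_PATH}" : '');
        $ENV{PATH}            = "/usr/lib/oracle/11.2/client64/bin:$ENV{PATH}";
        exec($^X, $0, @ARGV) or die "re-exec failed: $!";
    }
}
use strict;
use DBD::Oracle;
use DBI;
# ... rest of the script unchanged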

JSCH setCommand is not working

No exception is thrown, and the command also does no work. The directory's permissions are not changed, and the directory is not created. Please give your suggestions.
Update:
channelexe.getExitStatus() was added, but the problem is that it returns -1. What does that mean? I don't know how to find an explanation of why the command is not doing its job (setting mode 777 on fileDir1).
String depDir = "/usr/local/FTPReceive/DEPLOYED/fileDir1";
log.info("updateDepositedFilePermission ........ starts");
Session session = new FTPComponent().getSession("");
Channel channel = null;
ChannelSftp channelSftp = null;
ChannelExec channelexe = null;
try
{
    session.connect();
    System.out.println("session is alive:" + session.isConnected());
    channel = session.openChannel("sftp");
    channel.connect();
    channelSftp = (ChannelSftp) channel;
    channelexe = (ChannelExec) session.openChannel("exec");
    channelexe.setCommand("chmod 777 -R " + depDir);
    channelexe.connect();
    System.out.println("channelexe.getExitStatus:" + channelexe.getExitStatus());
}
catch (Exception e1)
{
    e1.printStackTrace();
    System.out.println("Manual Exception in updateDepositedFilePermission:" + CommonUtil.getExceptionString(e1));
}
channelexe.setCommand("chmod 777 -R " + depDir);
channelexe.setCommand("mkdir /usr/local/fileStore");
channelexe.connect();
A ChannelExec accepts a single command string to invoke on the remote system. Your second call to setCommand() is discarding the chmod command and replacing it with the mkdir command. Assuming the remote shell is bash or similar, you could use shell syntax to construct a command string which runs both commands:
String cmd = "chmod 777 -R " + depDir + " && mkdir /usr/local/fileStore";
channelexe.setCommand(cmd);
No Exception comes...
ChannelExec doesn't throw an exception when a command merely fails. You can call Channel.getExitStatus() to get the exit status of the remote command. The value will be 0 if chmod and mkdir succeeded, or non-zero if they failed. The channel also has functions to read the standard error of the remote command, which will permit you to read any error messages which they output.
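On the -1 in the update: getExitStatus() returns -1 until the remote command has finished and the channel has closed, so reading it right after connect() is too early. A sketch of waiting for completion and capturing standard error, using only standard JSch calls (assume it runs inside the existing try block, reusing the combined command string from above):
import java.io.ByteArrayOutputStream;

ChannelExec channelexe = (ChannelExec) session.openChannel("exec");
channelexe.setCommand("chmod 777 -R " + depDir + " && mkdir /usr/local/fileStore");
ByteArrayOutputStream err = new ByteArrayOutputStream();
channelexe.setErrStream(err);     // collect the command's error output
channelexe.connect();
while (!channelexe.isClosed())    // wait until the remote command finishes
{
    Thread.sleep(100);
}
System.out.println("exit status: " + channelexe.getExitStatus()); // 0 on success
System.out.println("stderr: " + err.toString());
channelexe.disconnect();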
The JSCH website has several example programs, including an example of executing a remote command.

Gulp task to SSH and then mysqldump

So I've got this scenario where I have separate web and MySQL servers, and I can only connect to the MySQL server from the web server.
So basically, every time, I have to go:
step 1: 'ssh -i ~/somecert.pem ubuntu@1.2.3.4'
step 2: 'mysqldump -u root -p'password' -h 6.7.8.9 database_name > output.sql'
I'm new to gulp, and my aim was to create a task that could automate all this, so that running one gulp task would automatically deliver me the SQL file.
This would make developer life a lot easier, since it would take just one command to download the latest DB dump.
This is where I got so far (gulpfile.js):
////////////////////////////////////////////////////////////////////
// Run: 'gulp download-db' to get latest SQL dump from production //
// File will be put under the 'dumps' folder //
////////////////////////////////////////////////////////////////////
// Load stuff
'use strict'
var gulp = require('gulp')
var GulpSSH = require('gulp-ssh')
var fs = require('fs');
// Function to get home path
function getUserHome() {
  return process.env.HOME || process.env.USERPROFILE;
}
var homepath = getUserHome();
///////////////////////////////////////
// SETTINGS (change if needed) //
///////////////////////////////////////
var config = {
  // SSH connection
  host: '1.2.3.4',
  port: 22,
  username: 'ubuntu',
  //password: '1337p4ssw0rd', // Uncomment if needed
  privateKey: fs.readFileSync(homepath + '/certs/somecert.pem'), // Comment out if not needed
  // MySQL connection
  db_host: 'localhost',
  db_name: 'clients_db',
  db_username: 'root',
  db_password: 'dbp4ssw0rd',
}
////////////////////////////////////////////////
// Core script, don't need to touch from here //
////////////////////////////////////////////////
// Set up SSH connector
var gulpSSH = new GulpSSH({
  ignoreErrors: true,
  sshConfig: config
})
// Run the mysqldump
gulp.task('download-db', function () {
  return gulpSSH
    // runs the mysqldump
    .exec(['mysqldump -u ' + config.db_username + ' -p\'' + config.db_password + '\' -h ' + config.db_host + ' ' + config.db_name], {filePath: 'dump.sql'})
    // pipes output into local folder
    .pipe(gulp.dest('dumps'))
})
// Run search/replace "optional"
SSH-ing into the web server works fine, but I have an issue when trying to get the mysqldump; I'm getting this message:
events.js:85
throw er; // Unhandled 'error' event
^
Error: Warning:
If I try the same mysqldump command manually over SSH on the server, I get:
Warning: mysqldump: unknown variable 'loose-local-infile=1'
followed by the correct MySQL dump output.
So I think this warning message is messing up my script. I would like to ignore warnings in cases like this, but I don't know how to do it, or whether it's possible.
Also I read that using the password directly in the command line is not really good practice.
Ideally, I would like to have all the config vars loaded from another file, but this is my first gulp task and I'm not really familiar with how I would do that.
Can someone with experience in Gulp orient me towards a good way of getting this thing done? Or do you think I shouldn't be using Gulp for this at all?
Thanks!
As I suspected, that warning message was preventing the gulp task from finishing. I got rid of it by commenting out the loose-local-infile=1 line in /etc/mysql/my.cnf.
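For the side question of loading the config vars from another file: a minimal sketch, assuming a JSON file named ssh-config.json next to the gulpfile (hypothetical name; keep it out of version control since it holds credentials):
// ssh-config.json (hypothetical; add it to .gitignore):
// {
//   "host": "1.2.3.4",
//   "port": 22,
//   "username": "ubuntu",
//   "db_host": "localhost",
//   "db_name": "clients_db",
//   "db_username": "root",
//   "db_password": "dbp4ssw0rd"
// }
var fs = require('fs');
var config = require('./ssh-config.json'); // require() parses JSON files natively
config.privateKey = fs.readFileSync(homepath + '/certs/somecert.pem');
The rest of the gulpfile can then stay unchanged, since the loaded object has the same shape as the inline config it replaces.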