How do I run a script on VxWorks Tornado Shell? - vxworks

I am trying to run a script on VxWorks Shell, which will load a module.
I use a Perl script to telnet into the system, login and get access to the shell.
I am able to run the basic commands like 'i', 'time', 'ls', 'pwd', 'h', and so on.
But I would like to run a script, say 'test.o'.
If I do: <C:\Path\subfolder\test.o, the script file WILL run from the Tornado Shell.
But here I have connected via Telnet from Perl.
So I connect this way:
use Net::Telnet;
my $username = "username";
my $password = "password";
my $t = Net::Telnet->new(Timeout => 10, Errmode => 'die');
$t->open('10.42.177.123');
$t->login($username,$password); # Logins as expected.
my @lines = $t->cmd('i'); # To test
print @lines; # This works
@lines = $t->cmd('<C:\\Path\\Subfolder\\test.o'); # This is not working for me. HELP!
print @lines; # Prints the error below
I get an error saying :
Unknown directory: /C:\Path\Subfolder
can't open input 'C:\Path\Subfolder\test.o
errno = 0x1f5
How do I run my script file if it is residing at a particular folder of the host PC?
I am able to run the script manually from the Tornado Shell window, where the prompt looks like ->, and hence it is a working script. And as I have said, I am able to run and print the basic VxWorks shell commands ("built-in functions").
Any help? [My OS is Windows 7]
Thanks!

This issue is now resolved. There were two problems. One was that Tornado, another VxWorks client, was logged into the system at the same time that my Perl script was sending commands over Telnet; having two clients (Tornado and my script's Telnet session) connected at once did not work, despite the VxWorks OS on the embedded system running a Telnet daemon.
As for the error above, it was a syntax error. I should have used
$t->cmd('<\\Path\\subfolder\\test.o');
There is no need to give C:.
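For reference, the same session can also be driven from Python instead of Perl. Below is a minimal sketch using Python's telnetlib; the IP and credentials are the placeholders from the question, and the login/password/prompt strings are assumptions that may need adjusting for your target:
import telnetlib

# Host and credentials are the placeholders from the question.
tn = telnetlib.Telnet("10.42.177.123", timeout=10)
tn.read_until(b"login: ")        # assumed login prompt
tn.write(b"username\n")
tn.read_until(b"Password: ")     # assumed password prompt
tn.write(b"password\n")
tn.read_until(b"-> ")            # VxWorks shell prompt
tn.write(b"<\\Path\\subfolder\\test.o\n")  # no drive letter, as noted above
print(tn.read_until(b"-> ", timeout=10).decode("ascii", "replace"))
tn.close()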

Related

SSH connection command to embedded OS QNX Neutrino via paramiko [duplicate]

I am trying to run the sesu command on a Unix server from Python with the help of Paramiko's exec_command. However, when I run exec_command('sesu test'), I am getting:
sh: sesu: not found
When I run a simple ls command, it gives me the desired output. Only the sesu command is not working.
This is what my code looks like:
import paramiko

host = host
username = username
password = password
port = port
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin, stdout, stderr = ssh.exec_command('sesu test')
stdin.write('Password\n')
stdin.flush()
outlines = stdout.readlines()
resp = ''.join(outlines)
print(resp)
The SSHClient.exec_command by default does not run the shell in "login" mode and does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (might be) sourced than in your regular interactive SSH session (in particular, for non-interactive sessions, .bash_profile is not sourced), and/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable.
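You can see the difference for yourself. Here is a quick sketch, reusing the ssh object from the code above, that prints what the "exec" channel's environment actually looks like:
# Compare this output with echo "$PATH" in your interactive session;
# PATH is often shorter here, and TERM is typically unset.
stdin, stdout, stderr = ssh.exec_command('echo "PATH=$PATH"; echo "TERM=$TERM"')
print(stdout.read().decode())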
Possible solutions (in preference order):
Fix the command not to rely on a specific environment. Use a full path to sesu in the command. E.g.:
/bin/sesu test
If you do not know the full path, on common *nix systems you can run which sesu in your interactive SSH session.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
Try running the command explicitly via a login shell (use the --login switch with common *nix shells); see the sketch below:
bash --login -c "sesu test"
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. The syntax for that depends on the remote system and/or the shell. On common *nix systems, this works:
PATH="$PATH:/path/to/sesu" && sesu test
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel using the get_pty parameter:
stdin,stdout,stderr = ssh.exec_command('sesu test', get_pty=True)
Using a pseudo terminal to automate command execution can bring nasty side effects. See, for example, Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
You may have a similar problem with LD_LIBRARY_PATH and locating shared objects.
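As a minimal sketch of the --login approach (the third option above) combined with exec_command, assuming bash is available on the remote system:
# Run the command through a login shell so .bash_profile is sourced.
stdin, stdout, stderr = ssh.exec_command('bash --login -c "sesu test"')
print(stdout.read().decode())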
See also:
Environment variable differences when using Paramiko
Certain Unix commands fail with "... not found", when executed through Java using JSch

Execute a shell command outside of a sandbox while in a sandbox

I'm using Singularity to run Python in an environment deprived of Python. I'm also running a MySQL instance as explained by Iowa State University (running an instance of MySQL, and closing it when done).
For clarity, I'm using a bash script to open MySQL, then do what I have to do (a Python script), and close MySQL, and it works fine. But Python's only way to stop when an error occurs is sys.exit([value]), and this not only stops the Python script but also the bash script that ran it. This makes it impossible for me to manage the errors and close the MySQL instance if the Python script exits.
My question is: is there a way for me to execute 'singularity instance stop mysql' while inside the Python sandbox? Something to tell Singularity, "hey, this command here must be run on the host!"?
I keep searching but can't find anything.
I tried executing it with subprocess like any other command, but it returned an error message because this instance does not exist inside the Python sandbox. I don't even have Singularity in this sandbox.
For any clarifications, just ask me, I'm trying to be clear but I'm pretty sure it's not very clear.
Thanks a lot !
Generally speaking, it would be a big security issue if a process could be initiated from inside a container (docker or singularity) but run in the host OS's namespace.
If the bash script is exiting on the Python failure, it sounds like you're using set -e or #!/bin/bash -e. This causes the script to abort if any command returns non-zero. It's commonly recommended for safer processing, but can cause problems like this at times. To bypass that for the Python step, you can modify your script:
# start mysql, do some stuff
set +e # disable abort on non-zero return
python my_script.py
set -e # re-enable abort on non-zero
# shut down mysql, do other stuff
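Alternatively, the host-side wrapper itself can be written in Python, which makes the cleanup explicit. A minimal sketch, assuming Singularity 3.x command syntax; the image and instance names are illustrative, not from the original setup:
import subprocess

# Start the MySQL instance (image/instance names are assumptions).
subprocess.run(["singularity", "instance", "start", "mysql.simg", "mysql"],
               check=True)
try:
    # Run the inner script without check=True, so a sys.exit(1) inside it
    # surfaces as a return code here instead of aborting the wrapper.
    result = subprocess.run(["singularity", "exec", "python.simg",
                             "python", "my_script.py"])
    print("inner script exit code:", result.returncode)
finally:
    # Always stop the instance, even if the inner script failed.
    subprocess.run(["singularity", "instance", "stop", "mysql"])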

SSH with command Bat file [duplicate]

I have a scenario where I need to run a Linux shell command frequently (with different filenames) from Windows. I am using PuTTY and WinSCP to do that (both require a login name and password). The file is copied to a predefined folder on the Linux machine through WinSCP, and then the command is run from PuTTY. Is there a way to automate this through a program? Ideally, I would like to right-click the file in Windows and issue the command, which would copy the file to the remote machine and run the predefined command (in PuTTY) with the filename as an argument.
PuTTY usually comes with the "plink" utility.
This is essentially the "ssh" command-line command implemented as a Windows .exe.
It is pretty well documented in the PuTTY manual under "Using the command-line tool plink".
You just need to wrap a command like:
plink root@myserver /etc/backups/do-backup.sh
in a .bat script.
You can also use common shell constructs, like semicolons, to execute multiple commands. E.g.:
plink read@myhost "ls -lrt /home/read/files; /etc/backups/do-backup.sh"
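If you would rather drive plink from a small program than from a .bat file, here is a minimal Python sketch (assuming plink.exe is on PATH and authentication is already handled, e.g. by a saved session or a key):
import subprocess

# -batch disables interactive prompts, so the call fails instead of hanging.
subprocess.run(["plink", "-batch", "root@myserver",
                "/etc/backups/do-backup.sh"], check=True)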
There could be security issues with common methods for auto-login.
One of the easiest ways is documented below:
Running Putty from the Windows Command Line
And as for the part that executes the command:
In the PuTTY UI, under Connection > SSH, there's a field for the remote command.
4.17 The SSH panel
The SSH panel allows you to configure options that only apply to SSH sessions.
4.17.1 Executing a specific command on the server
In SSH, you don't have to run a general shell session on the server. Instead, you can choose to run a single specific command (such as a mail user agent, for example). If you want to do this, enter the command in the "Remote command" box.
http://the.earth.li/~sgtatham/putty/0.53/htmldoc/Chapter4.html
In short, your answer might well be similar to the text below: let PuTTY run a command on the remote server.
You can write a Tcl script that establishes an SSH session to that Linux machine and issues commands automatically. Check http://wiki.tcl.tk/11542 for a short tutorial.
You can create a PuTTY session and auto-load the script on the server when starting the session:
putty -load "sessionName"
In the "Remote command" field, point to the remote script.
You can do both tasks (the upload and the command execution) using WinSCP. Use WinSCP script like:
option batch abort
option confirm off
open your_session
put %1%
call script.sh
exit
Reference for the call command:
https://winscp.net/eng/docs/scriptcommand_call
Reference for the %1% syntax:
https://winscp.net/eng/docs/scripting#syntax
You can then run the script like:
winscp.exe /console /script=script_path\upload.txt /parameter file_to_upload.dat
Actually, you can put a shortcut to the above command into Windows Explorer's Send To menu, so that you can then just right-click any file and go to Send To > Upload using WinSCP and Execute Remote Command (= the name of the shortcut).
For that, go to the folder %USERPROFILE%\SendTo and create a shortcut with the following target:
winscp_path\winscp.exe /console /script=script_path\upload.txt /parameter %1
See Creating entry in Explorer's "Send To" menu.
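The Send To shortcut can also point at a small wrapper script instead of winscp.exe directly, which is convenient if you later want logging or validation. A minimal Python sketch; the WinSCP install path and the upload.txt script are assumptions:
import subprocess
import sys

# sys.argv[1] is the file Windows Explorer passes via Send To.
winscp = r"C:\Program Files (x86)\WinSCP\WinSCP.exe"  # assumed install path
subprocess.run([winscp, "/console", r"/script=script_path\upload.txt",
                "/parameter", sys.argv[1]], check=True)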
Here is a totally out of the box solution.
Install AutoHotKey (ahk)
Map the script to a key (e.g. F9)
In the AHK script,
a) FTP the commands (.ksh) file to the Linux machine
b) Use plink like below. Plink should be installed if you have PuTTY.
plink sessionname -l username -pw password test.ksh
or
plink -ssh example.com -l username -pw password test.ksh
All the steps will be performed in sequence whenever you press F9 in Windows.
Code:
using System;
using System.Diagnostics;

namespace playSound
{
    class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine(args[0]);
            Process amixerMediaProcess = new Process();
            amixerMediaProcess.StartInfo.CreateNoWindow = false;
            amixerMediaProcess.StartInfo.UseShellExecute = false;
            amixerMediaProcess.StartInfo.ErrorDialog = false;
            amixerMediaProcess.StartInfo.RedirectStandardOutput = false;
            amixerMediaProcess.StartInfo.RedirectStandardInput = false;
            amixerMediaProcess.StartInfo.RedirectStandardError = false;
            amixerMediaProcess.EnableRaisingEvents = true;
            amixerMediaProcess.StartInfo.Arguments = string.Format("{0}", "-ssh username@" + args[0] + " -pw password -m commands.txt");
            amixerMediaProcess.StartInfo.FileName = "plink.exe";
            amixerMediaProcess.Start();
            Console.Write("Press any key to continue . . . ");
            Console.ReadKey(true);
        }
    }
}
Sample commands.txt:
ps
Link: https://huseyincakir.wordpress.com/2015/08/27/send-commands-to-a-remote-device-over-puttyssh-putty-send-command-from-command-line/
Try MtPutty;
you can automate the SSH login in it. It's a great tool, especially if you need to log in to multiple servers many times. Try it here.
Another tool worth trying is TeraTerm. It's really easy to use for the SSH automation stuff. You can get it here. But my favorite one is always MtPutty.
In case you are using key-based authentication, using a saved PuTTY session seems to work great, for example to run a shell script on a remote server (in my case an EC2 instance). The saved configuration will take care of authentication.
C:\Users> plink saved_putty_session_name path_to_shell_file/filename.sh
Please remember that if you save your session with a name like user@hostname, this command will not work, as the name will be treated as part of the remote command.

Jenkins SSH remote process is getting killed as soon as the Jenkins SSH plugin returns back

Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
The behavior I see is: when I run the Jenkins job and call the restart script (which gets the application name as parameter $1), it works fine, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive. But when the same script runs from Jenkins, even though I see no errors and everything appears to work, the new process dies as soon as the above-mentioned SSH step completes and control comes back to the next BUILD step in the Jenkins job, or the Jenkins job finishes.
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e., Prepare Environment variables and also using Inject Environment variables...). When a particular build is complete, I can see that the environment variables for that build do show BUILD_ID=dontKillMe (instead of the default timestamp value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This doesn't give any error and creates a nohup.out file on the remote server (I'm not worried about that, as restart_tomcat.sh creates its own log file, which I cat after the script completes; the cat is performed in another "Execute shell script on remote host using SSH" build step, and it successfully shows the log file created by the restart script).
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if it's relatively small (read the first link for information).
The BUILD_ID=dontKillMe should be passed to the shell, and therefore it should be in your command line, not in Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
My solution (it worked after trying everything else) in Ubuntu 14.04 (Trusty Tahr) (Amazon AWS - Amazon EC2), Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
// Example COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as my last one:
#!/bin/ksh
export BUILD_ID=dontKillMe
I added the above lines to the start of my script and the issue was resolved.

Reading profile script in non-interactive mode with AIX implementation of ksh

Please note that this is an AIX related question.
I have a Jenkins server running on Red Hat, which runs a node via SSH on an AIX server.
The commands are run non-interactively over SSH as a user on the AIX machine whose standard shell is ksh.
The problem is that this build needs a number of environment variables, and I can't seem to get it to work.
I have tried:
Jenkins allows me to set some environment variables for the session. So I tried:
ENV="$HOME/.profile"
I tried creating a .kshrc file containing
. .profile
But none of these approaches seems to make ksh run the .profile script.
The .profile script contains the environment setup for the user i need.
How do I get the AIX implementation of ksh to run my .profile script before executing commands?
You need to specifically tell Jenkins that you want to execute the commands in the ksh shell.
By default, Jenkins runs as sh <commands>.
Add a shebang to your shell command as the first line:
#!/bin/ksh
Most shells don't source their .profile files in non-interactive sessions. A simple solution is to source the .profile yourself as part of the command you are sending.
So instead of
yourcommand1; yourcommand2
you should send
. ~/.profile; yourcommand1; yourcommand2
over SSH.
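For instance, if you were driving the session from a script of your own, the same idea looks like this. A sketch using paramiko; the host and credentials are placeholders:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("aix-host", username="builduser", password="secret")  # placeholders

# Source the profile first so the commands see the expected environment.
stdin, stdout, stderr = ssh.exec_command(". ~/.profile; yourcommand1; yourcommand2")
print(stdout.read().decode())
ssh.close()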
UPDATE, after reading the comment that Jenkins controls the SSH command:
In case your SSH command is performed by Jenkins, you should have a look at https://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin, especially the 'Login profile files' paragraph.
I'd say one of these solutions is best:
Set all environment variables from Jenkins using the node's configure page. Install the EnvInject plugin to do this.
Write a wrapper around the java command on the slave that sources your profile script and adjust the JavaPath (also on the node's configure page) to point to that wrapper.
The only way I know of for setting environment variables that will apply for non-interactive shells on AIX is via /etc/environment. I believe this is the correct place, but it will of course then apply to all users and all shells.