Trying to execute a script through tclsh + SFTP/SSH

For context, I am trying to run a Tcl script that is hosted on a server.
To run this script, I log in to a Cisco router and use the command:
tclsh tftp://[server_ip]/location/script.tcl
It works over TFTP, but when I try SFTP it does not work: it says "No such file or directory", even though I can copy the file over SFTP and that works.
The command I use to copy the file is
copy sftp://[server_ip]/location/script.tcl flash:
so the router does support SFTP.
To try to run it, I do exactly as with TFTP:
tclsh sftp://[server_ip]/location/script.tcl
to no avail.
So the copy works, but executing the script does not. Any idea how to fix this?
EDIT: I also tried SCP, to no avail.
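Since the copy works, one workaround worth trying is to stage the script on flash and execute it locally; the flash filename below is just an example:
copy sftp://[server_ip]/location/script.tcl flash:script.tcl
tclsh flash:script.tcl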

ConEmu + WSL: Open new console in current tab directory

I'm using WSL and ConEmu build 180506. I'm trying to set up a task in ConEmu that uses the current directory of the active tab when opening a new console, but I cannot get it to work.
What I did was set up the task {Bash: bash} using the instructions on this page, setting the task command to:
set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -C~ -cur_console:pm:/mnt
Then, following the instructions on this page, I added this to my .bashrc:
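# Report the current working directory to ConEmu (its OSC 9;9 sequence; 9;12 marks the prompt)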
if [[ -n "${ConEmuPID}" ]]; then
  PS1="$PS1\[\e]9;9;\"\w\"\007\e]9;12\007\]"
fi
and finally set up a shortcut using the macro:
Shell("new_console", "{bash}", "", "%CD%")
But it always opens the new console in the default directory (/home/[username]).
I don't understand what I'm doing wrong.
I also noticed that many of the environment variables listed here are not set; basically, only $ConEmuPID and $ConEmuBuild seem to be set.
Any help would be appreciated.
GuiMacro Shell was intended to run certain commands, not tasks.
You may try running the macro Task("{bash}","%CD%") instead.
Or set your {bash} task parameters to -dir %CD% and just set a hotkey for your task.
Of course, both methods require working current-directory acquisition from the shell. That seems to be OK in your case: %d shows the proper folder.
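If you go the task-parameters route, the {bash} task would end up looking roughly like this (the command line is the one from the question; only -dir %CD% is new):
Task parameters: -dir %CD%
Commands: set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -C~ -cur_console:pm:/mnt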
I found the answer:
Shell("new_console:I", "bash.exe", "", "%CD%")
The readme is actually pretty good: https://github.com/cmderdev/cmder/blob/master/README.md

Jenkins SSH remote process is getting killed as soon as the Jenkins SSH plugin returns

Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
I observed that when I run the Jenkins job and call the restart script (which takes the application name as parameter $1), it works fine, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive indefinitely. Running the same script from Jenkins shows no errors and appears to work as expected, but the new process dies as soon as the above-mentioned SSH step completes and control comes back to the next BUILD step, or the Jenkins job finishes.
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e. Prepare Environment variables and also using Inject Environment variables). When a particular build completes, its Environment Variables page does show BUILD_ID=dontKillMe as the value (instead of the default timestamp value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This doesn't give any error and creates a nohup.out file on the remote server. (I'm not worried about that file, since restart_tomcat.sh creates its own LOG file, which I cat after the script completes. The cat is done in another "Execute shell script on remote host using SSH" build step, and it successfully shows the log file created by the restart script.)
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if it's relatively small (read the first link for information).
BUILD_ID=dontKillMe must be passed to the shell, so it belongs on your command line, not in the Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
My solution (found after trying everything else), on Ubuntu 14.04 (Trusty Tahr) on Amazon EC2, with Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
// Example COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as my last one.
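Applied to the restart script from the question, that pattern would look something like this (a sketch: setsid starts the command in a new session detached from the SSH login, and the redirections keep it from holding the SSH channel open):
(setsid nohup restart_tomcat.sh "${app}" < /dev/null > /dev/null 2>&1 &)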
#!/bin/ksh
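# Tell the Jenkins ProcessTreeKiller not to reap processes started by this script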
export BUILD_ID=dontKillMe
I added the above lines to the start of my script and the issue was resolved.

Reading profile script in non-interactive mode with AIX implementation of ksh

Please note that this is an AIX related question.
I have a Jenkins server running on Red Hat, which runs a node via SSH on an AIX server.
The commands are run non-interactively over SSH as a user on the AIX machine whose standard shell is ksh.
The problem is that this build needs a number of environment variables, and I can't seem to get them set.
I have tried the following:
Jenkins allows me to set some environment variables for the session, so I tried:
ENV="$HOME/.profile"
I also tried creating a .kshrc file containing:
. .profile
But none of these approaches seems to make ksh run the .profile script. That script contains the environment setup I need for this user.
How do I get the AIX implementation of ksh to run my .profile script before executing commands?
You need to tell Jenkins specifically that you want the commands executed in ksh.
By default, Jenkins runs them as sh <commands>.
Add a shebang as the first line of your shell command:
#!/bin/ksh
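For example, the build-step body might look like this (a sketch; yourcommand1 is a placeholder for the real build commands, and explicitly sourcing the profile covers the environment setup):
#!/bin/ksh
. "$HOME/.profile"   # pull in the user's environment setup
yourcommand1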
Most shells don't source their .profile files on non-interactive sessions. A simple solution is to source the .profile yourself as part of the command you are sending.
So instead of
yourcommand1; yourcommand2
you should send
. ~/.profile; yourcommand1; yourcommand2
over SSH.
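Outside Jenkins, the equivalent plain ssh invocation would be something like this (user and host are illustrative):
ssh user@aixhost '. ~/.profile; yourcommand1; yourcommand2'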
UPDATE after reading the comment about Jenkins controlling the ssh command
If your ssh command is performed by Jenkins, you should have a look at https://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin, especially the 'Login profile files' paragraph.
I'd say one of these solutions is best:
Set all environment variables from Jenkins using the node's configure page. Install the EnvInject plugin to do this.
Write a wrapper around the java command on the slave that sources your profile script and adjust the JavaPath (also on the node's configure page) to point to that wrapper.
The only way I know of for setting environment variables that will apply for non-interactive shells on AIX is via /etc/environment. I believe this is the correct place, but it will of course then apply to all users and all shells.
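For reference, /etc/environment on AIX takes plain NAME=value lines, with no export keyword; the variable below is a made-up example:
# /etc/environment on AIX: applies to all users and all shells
MY_BUILD_HOME=/opt/build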

How to pass an argument to another Jenkins build step

In a Jenkins job I need to execute a shell on the Jenkins machine. I use an Execute shell step, and from it I get a value which has to be used in another step, Execute shell script on remote host using SSH. I can't find a way to pass that value...
Here is my Jenkins job configuration: [screenshot]
I appreciate any help.
Content of the file which contains the script:
echo "VERSION=`/data/...`" > env.properties
Path of the properties file:
env.properties
In the shell script using SSH:
echo "... $VERSION"
Something like this works for the build; maybe it works here too.
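Putting that together, the steps might line up roughly like this (a sketch, assuming an Inject environment variables step, e.g. from the EnvInject plugin, reads env.properties between the two shell steps; the elided command path is kept from the answer):
# Build step 1: Execute shell (on the Jenkins machine)
echo "VERSION=`/data/...`" > env.properties
# Build step 2: Inject environment variables, with Properties File Path = env.properties
# Build step 3: Execute shell script on remote host using SSH
echo "... $VERSION"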

ssh scripting and copying files

I am writing a Bash deployment script on RHEL 5. The script runs great and sends out an email at the end of the run. However, at the end of the script, if I detect any failure, I need to copy log files back to the local server to attach to the email.
The script can detect failure fine; the question is how to copy the log files back. I don't want to just cat the log files, as they can be huge.
Any suggestions?
Thanks
S
If I understand your problem correctly, you should use scp:
http://linux.die.net/man/1/scp
and here you can find how to automate the login so you can use it in a script
http://linuxproblem.org/art_9.html
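A minimal sketch of pulling the log back on failure (host and paths are illustrative):
scp user@remote-server:/var/log/deploy.log /tmp/deploy.log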
I can't see any easy way of avoiding a second login with scp/sftp. If you're sure that it's only the log file that will be returned, you could do something like the following:
ssh -e none REMOTE SCRIPT | gzip -dc > LOGFILE
Inside SCRIPT you have something like gzip -c LOGFILE when it fails.
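A sketch of what the remote side might look like (deploy_step and the log path are placeholders; on failure the compressed log is streamed to stdout, which the ssh pipeline above decompresses into a local file):
#!/bin/bash
LOGFILE=/var/log/deploy.log          # illustrative path
if ! deploy_step > /dev/null 2>&1; then   # keep stdout clean so only the gzip stream is returned
    gzip -c "$LOGFILE"               # stream the compressed log back over the ssh channel
    exit 1
fi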