jsch ChannelExec running a .sh script with nohup "loses" some commands - ssh

I have a .sh script that glues together many other scripts; it is called by jsch ChannelExec from a Windows application.
Channel channel = session.openChannel("exec");
((ChannelExec) channel).setCommand("/foo/bar/foobar.sh");
channel.connect();
If I run the command like "nohup /foo/bar/foobar.sh >> /path/to/foo.log &", all the long-running jobs (database operations, file processing, etc.) seem to get lost.
Checking the log file, I only find the echo output (before and after each long-running operation, running-time calculations, etc.).
I checked the permissions and $PATH, and added source /etc/profile to my .sh, yet none of this works.
But when I run the command normally (a synchronous run, printing all echo output to my Java client on Windows), everything goes well.
It's a really specific problem; I hope someone with experience can help me out.
Thanks in advance.
Han

Solved!
"A different issue that often arises in this situation is that ssh is refusing to log off ("hangs"), since it refuses to lose any data from/to the background job(s).[6][7] This problem can also be overcome by redirecting all three I/O streams."
(from http://en.wikipedia.org/wiki/Nohup)
My problem is that psql and pg_bulkload print their output to the error stream.
In my script, I didn't redirect the error stream.
Everything went fine once I also redirected stderr to the same log file:
nohup foo.sh > log.log 2>&1 &
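For completeness, a minimal sketch applying the Wikipedia advice (redirect all three streams) to the original command from the question; paths are the question's placeholders:

nohup /foo/bar/foobar.sh > /path/to/foo.log 2>&1 < /dev/null &

Redirecting stdin from /dev/null as well means nothing keeps the remote session tied to the channel waiting for input, so the JSch channel can disconnect immediately after launching the script.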
Thanks to Atsuhiko Yamanaka, who created the great JSch library, and to Paŭlo Ebermann for the documentation.

Related

Redirecting output of bluez btmgmt to file from systemd service

I am trying to use separate SSP modes during connection with the Bluetooth btmgmt utility. The basic idea is to scan the current device's OUI and select SSP on/off mode accordingly. But I can't get any answer from either btmgmt con or btmgmt info when I put them into .service files. My system is 32-bit Arch Linux ARM and the BlueZ stack version is 5.55-1. I tried
[Unit]
Description=check Bluetooth address
[Service]
Type=oneshot
ExecStart=/usr/bin/bash -c '/usr/bin/btmgmt info >> /usr/local/lib/mac 2>&1'
without any success: it just puts nothing in the output file. Tricks like adding
User=root
Group=root
or substituting ExecStart with
ExecStart=/bin/bash -c 'echo -e "$(btmgmt con)" >> /usr/local/lib/mac 2>&1'
did nothing. I also tried putting the btmgmt calls into a separate bash script instead of starting them right from the service file, i.e.
ExecStart=/usr/local/lib/test1
to no avail. I'm completely confused, because:
It doesn't seem to be a general btmgmt problem, because I can set the SSP mode from service files
in a very simple manner, just using
ExecStart=btmgmt ssp on
or
ExecStart=btmgmt ssp off
even without the full path.
It doesn't seem to be a redirection error either, because if I add
ExecStartPre=/usr/bin/bash -c '/usr/bin/fdisk -l > /usr/local/lib/mac 2>&1'
it works without any problem and I see the fdisk info in the file (I use fdisk because it requires elevated rights, just like btmgmt).
Moreover, btmgmt info fails in the same way, i.e. shows nothing in the output file. That makes me think something is wrong with the output side of btmgmt. I say output because the input parameters work fine in the btmgmt ssp on/off commands, and journalctl and systemctl don't show any errors in the btmgmt con/info cases; so it seems the output is generated successfully but then sent off somewhere into outer space, though I'm not completely sure.
Thanks for any help in advance.
Well, here is my conclusion: btmgmt was never really intended to have its output consumed this way; there are a zillion variants of output handling in it, and the bug is buried somewhere deep inside bt_shell_printf and its struct data. I'm not ready to dig through all that poorly documented code (I mean the total absence of comments or more-or-less satisfactory man pages), so I ended up with a slightly hacked version of the utility, downgraded to plain fprintf for output, keeping only the commands I need (con, ssp, power) and an info command simplified to print only the current settings. Everything works fine, and I can use that kind of btmgmt not only with systemd, but even with udev rules (indirectly, of course).
I know this is an old question, but I had the same problem and managed to fix it after hours and hours of trying.
I don't know why in the world this has to be so hard with btmgmt, but here's the fix:
[Unit]
After=bluetooth.service
Description=Bluetooth service
[Service]
ExecStart=<your-process>
Group=root
StandardInput=tty
TTYPath=/dev/tty2
TTYReset=yes
TTYVHangup=yes
Type=simple
User=root
Basically, by occupying a TTY, btmgmt thinks it's running in an interactive terminal and produces output as usual.
Hope this saves anyone the hell I've been through!
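The same fake-TTY idea also works from a plain shell, without a unit file: the script(1) utility from util-linux runs a command under a pseudo-terminal and logs its output to a file. As an untested sketch (not from the answers above; the output path is the one from the question):

script -q -c '/usr/bin/btmgmt info' /usr/local/lib/mac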

SSH command step not working for one command - in JMeter

I have a unique problem using the JMeter SSH Command sampler.
I use this step to run Spark jobs.
The problem is that one of the commands doesn't work: to clarify, it connects but gets no response, it just waits and waits for hours, and nothing is displayed on screen.
I know how to work with the tool, and this behavior is specific to this one script.
All the other scripts worked; for example, I duplicated one that worked:
sudo /run_stg.sh (this command worked)
sudo /run_off2-stg.sh (this command did not work)
If I run the job manually via Jenkins, it works.
If I go to the command line and use plik ssh, it works.
The problem is just JMeter, which waits and waits, and I cannot understand what for.
The job takes about 3 minutes, yet I have waited for a response in JMeter for 4 hours and gotten nothing; JMeter just waits.
I set the console log to trace level and saw nothing; I have absolutely no idea how to start handling this issue in JMeter.
Can anyone please advise how to make JMeter report what happened,
or at least whether it connected at all?
Because of this behavior, none of the tests can be performed.
Most probably you are, as usual, misconfiguring the SSH Command sampler.
The idea is not to run the script per se; you need to delegate the script execution to a Unix shell, for example Bash. This way you will be able to combine several commands together, see the output, amend the debugging level, etc.
So I would recommend setting your command to something like /bin/bash -c -x /your/script.sh
Another guess: given that you use sudo, it might be that the sudo command is simply waiting for a password (which JMeter never provides). If this is the case, try amending the script's permissions using the chmod command and allowing your user to execute it without root privileges.
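Combining the two suggestions into one sketch (the script path is the placeholder from the question; sudo -n is a standard sudo flag that makes it fail immediately instead of prompting):

/bin/bash -c 'set -x; sudo -n /run_off2-stg.sh; echo "exit code: $?"'

If sudo would have prompted for a password, -n turns the silent hang into an error message you can actually see in the sampler's response data.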
And finally, given that you're able to run your command using "plik ssh" (whatever that is), you can run it using the OS Process Sampler.
More information: How to Run External Commands and Programs Locally and Remotely from JMeter

Endeca Forge failure move_dgraph-input_to_dgraph-input-old

I have an intermittent issue when distributing indexes :( All servers are Windows Server 2008.
I have two servers to distribute to, and one of them has failed twice with this error:
INFO: [MDEXHost1] Starting shell utility 'move_dgraph-input_to_dgraph-input-old'.
10-Jun-2015 06:08:36 com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript
SEVERE: Utility 'move_dgraph-input_to_dgraph-input-old' failed.
With a bit of further digging I found this error in a log file in the PlatformServices\workspace\logs\shell folder:
Failed to move D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input to
D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input_old: No such file or directory at -e line 1.
The state of the server is that it has a dgraph_input_new folder but is struggling to create the dgraph_input_old folder. The dgraph_input folder does exist, so the 'No such file or directory' is interesting.
The server has plenty of disk space for the operation, and as the failure is intermittent I don't think it's file/folder permissions (otherwise it would fail every time). I've even asked for on-access virus scanning to be disabled for those folders in case our virus scanner was locking files/folders.
I'm struggling to come up with a resolution to the issue, HALP!
EDIT: The forge process did stop the dgraph, but the Tomcat6 process is still running. Is that normal? Could Tomcat be locking the folder?
EDIT: The task that moves the folder is a bit of Perl that looks like this:
perl.exe -e "use strict; use File::Spec; use File::Copy; use File::Glob qw/:glob/; my $source = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input'; $source =~ s/[\\\/]+$//; my @sources = bsd_glob($source); foreach my $file (@sources) { my @fromPath = File::Spec->splitdir($file); if (scalar @fromPath eq 0) { die \"Failed to split path: $!\"; } my $fromRelative = $fromPath[scalar @fromPath - 1]; my $toFile = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input_old'; if ( -d $toFile ) { $toFile = File::Spec->catdir($toFile, $fromRelative); } my $res = move($file, $toFile); if (! $res) { die \"Failed to move $file to $toFile: $!\"; }}"
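Stripped of the Perl machinery, the one-liner boils down to a single rename (a shell sketch of the same logic; directory names from the log above):

# rename dgraph_input to dgraph_input_old; if the target directory
# already exists, the source is moved inside it instead
mv dgraph_input dgraph_input_old

so any failure is the rename itself being refused, which fits a lock or permissions problem on the folder rather than a genuinely missing path.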
EDIT: It seems to be a plain permissions issue; I can't rename the folder without elevating myself to an administrator. The service is running as a user who is in the Administrators group.
What could happen to make this folder admin-only?
I know this question dates back 3 years, but I recently experienced the same issue and thought this could help others.
The logs were interesting, as GogLlundain pointed out.
The way to solve this is:
1. Stop the mdex server in the workbench, which will in parallel kill the dgraph process too.
2. If you open the task manager on the server where mdex is defined, you may find two dgraph.exe processes running.
3. Kill the older task (i.e. the older dgraph.exe), then run the baseline script; your process will run smoothly.

ssh scripting and copying files

I am writing a Bash deployment script on RHEL 5. The script runs great and sends out an email at the end of the run. However, if I detect any failure at the end of the script, I need to copy log files from the remote machine back to the local server to attach to the email.
The script can detect failures fine; the question is how to copy the log files back. I don't want to just cat the log files, as they can be huge.
Any suggestions?
Thanks
S
If I understand your problem correctly, you should use scp:
http://linux.die.net/man/1/scp
and here you can find how to automate the login so you can use it in a script:
http://linuxproblem.org/art_9.html
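For example (host and paths are placeholders), once key-based login is set up per the link above, pulling a remote log back is a single command:

scp deployuser@remotehost:/var/log/deploy.log /tmp/deploy.log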
I can't see any easy way of avoiding a second login with scp/sftp. If you're sure that only the log file will come back, you could do something like the following:
ssh -e none REMOTE SCRIPT | gzip -dc > LOGFILE
Inside SCRIPT you have something like gzip -c LOGFILE for when it fails.
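Putting the two halves together as a sketch (the host, log path, and the do_deploy step are placeholders for your own deployment logic):

# remote side (SCRIPT): on failure, stream the compressed log to stdout
if ! do_deploy; then
    gzip -c /var/log/deploy.log    # compressed log goes to stdout
    exit 1
fi

# local side: -e none disables ssh's escape character so the
# binary gzip stream passes through intact
ssh -e none remotehost ./SCRIPT | gzip -dc > deploy.log

This reuses the existing ssh session for the transfer, so no second login is needed.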

How to run a PHP script via SSH and keep it running after I quit

I'm trying to restart a custom IRC bot. I tried various commands:
load.php
daemon load.php
daemon load.php &&
But that makes the script execute inside the console (I see all the output), and when I quit, the bot quits as well.
The bot's author only taught me the IRC commands, so I'm a bit lost.
You can install a package called screen. Then run screen -dm php load.php and resume with screen -dR.
This will allow you to run the script in the background and still be able to use your current SSH terminal. You can also log out and the process will keep running.
Chances are good that the shell is sending the HUP signal to all of its running children when you log out, to indicate that "the line has been hung up" (a plain old telephone system modem reference to a line being "hung up" when disconnected; you know, because you "hang" the handset on the hook...).
The HUP signal politely asks all programs to die.
Try this:
nohup load.php &
nohup asks the next program executed to ignore the HUP signal. See the signal(7) and nohup(1) man pages for details. The & asks the shell to execute the program in the background.
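As in the JSch question at the top of this page, if the bot writes to stdout or stderr you may also want to redirect its streams so nothing ties it to the terminal (the log file name here is a placeholder):

nohup php load.php > bot.log 2>&1 < /dev/null &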
Clay's answer of using screen(1) is pretty awesome; definitely look into screen(1) or tmux(1), but I don't think they are necessary for this problem.
This line might help you:
php load.php &
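Note that some shells still deliver HUP to backgrounded jobs on logout. As a sketch of one more option (not from the answers above), bash's disown builtin removes the job from the shell's job table so no HUP is sent to it:

php load.php > /dev/null 2>&1 & disown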