Redirecting stdin in order to execute a script in VxWorks 6.7

I need to execute a script in VxWorks 6.7. In VxWorks 5.5 this could be done with the execute() function. The solution I am applying is to redirect stdin, as in the following code:
newStdIn = open("myScript.txt",O_RDONLY,0644);
oldStdIn=ioGlobalStdGet(STD_IN);
ioGlobalStdSet(STD_IN, newStdIn);
/*Read file here and execute*/
ioGlobalStdSet(STD_IN,oldStdIn); /*Restore old stdIn*/
close(newStdIn);
I am missing the read and execute part (where the comment is).
EDIT:
According to the VxWorks Kernel Programmer's Guide, the way to execute a script is as follows:
fdScript = open ("myScript", O_RDONLY);
shellGenericInit ("INTERPRETER=Cmd", 0, NULL, &shellTaskName, FALSE, FALSE, fdScript, STD_OUT, STD_ERR);
do
taskDelay (sysClkRateGet ());
while (taskNameToId (shellTaskName) != ERROR);
close (fdScript);
But it will open a new shell without processing the script.
The problem with this is that my application won't do anything after calling shellGenericInit.

It looks like you are starting a Cmd shell, rather than a C Interpreter shell.
Since you mention that you used the execute() function in 5.5, I assume that your script is for the C Interpreter.
Try changing "INTERPRETER=Cmd" to "INTERPRETER=C".
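For example, a minimal sketch reusing the kernel programmer's guide snippet quoted in the question, with only the interpreter string changed (the shellTaskName declaration is added here for completeness):
char * shellTaskName;
int fdScript = open ("myScript", O_RDONLY, 0);    /* the C-interpreter script */
shellGenericInit ("INTERPRETER=C", 0, NULL, &shellTaskName,
                  FALSE, FALSE, fdScript, STD_OUT, STD_ERR);
do                                                /* wait for the shell task to finish */
    taskDelay (sysClkRateGet ());
while (taskNameToId (shellTaskName) != ERROR);
close (fdScript);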

Related

Asynchronous reading of stdout

I've written this simple script; it generates one output line per second (generator.sh):
for i in {0..5}; do echo $i; sleep 1; done
The Raku program will launch this script and print the lines as soon as they appear:
my $proc = Proc::Async.new("sh", "generator.sh");
$proc.stdout.tap({ .print });
my $promise = $proc.start;
await $promise;
All works as expected: every second we see a new line. But let's rewrite the generator in Raku (generator.raku):
for 0..5 { .say; sleep 1 }
and change the first line of the program to this:
my $proc = Proc::Async.new("raku", "generator.raku");
Now something is wrong: first we see the first line of output ("0"), then a long pause, and finally we see all the remaining lines of the output at once.
I tried to capture the output of the generators via the script command:
script -c 'sh generator.sh' script-sh
script -c 'raku generator.raku' script-raku
Then I analyzed them in a hexadecimal editor, and they look the same: after each digit, the bytes 0d and 0a follow.
Why is there such a difference between seemingly identical generators? I need to understand this because I am going to launch an external program and process its output as it arrives.
Why is there such a difference between seemingly identical generators?
First, with regard to the title, the issue is not about the reading side, but rather the writing side.
Raku's I/O implementation looks at whether STDOUT is attached to a TTY. If it is a TTY, any output is immediately written to the output handle. However, if it's not a TTY, then it will apply buffering, which results in a significant performance improvement but at the cost of the output being chunked by the buffer size.
If you change generator.raku to disable output buffering:
$*OUT.out-buffer = False; for 0..5 { .say; sleep 1 }
Then the output will be seen immediately.
I need to understand this because I am going to launch an external program and process its output as it arrives.
It'll only be an issue if the external program you launch also has such a buffering policy.
In addition to Jonathan Worthington's answer: although buffering is an issue on the writing side, it is possible to cope with it on the reading side. stdbuf, unbuffer, and script can be used on Linux (see this discussion). On Windows, only winpty helped me, which I found here.
So, if the files winpty.exe, winpty-agent.exe, winpty.dll and msys-2.0.dll are in the working directory, this code can be used to run the program without buffering:
my $proc = Proc::Async.new(<winpty.exe -Xallow-non-tty -Xplain raku generator.raku>);

Using the Java jcabi SSH client (or another) to execute several commands in a shell

I understand how to create an SSH shell:
Shell ssh = new SshByPassword("192.168.1.5", 22, "admin", "password");
I also understand how to run a command:
String output = new Shell.Plain(ssh).exec("some command");
and I can easily analyze the output string,
but how do I send one command after the other in the same "shell"?
And a bonus question: sometimes the commands require a user confirmation ("press Y to continue").
Is that possible with the library?
Generally, most Java SSH APIs leave it to the developer to sort out the complexities of executing multiple commands within a shell. It is a complicated problem because SSH does not provide any indication of where commands start and end within the shell; the protocol only provides a stream of data, which is the raw output of the shell.
I would humbly like to introduce my project, Maverick Synergy, an open-source API (LGPL) that does provide an interface for interactive shells. I documented the options for interactive commands in an article.
Here is a very basic example, the ExpectShell class allows you to execute multiple commands, each time returning a ShellProcess that encapsulates the command output. You can use the ShellProcess InputStream to read the output, it will return EOF when the command is done.
You can also use a ShellProcessController to interact with the command as this example shows.
SshClient ssh = new SshClient("localhost", 22, "lee", "xxxxxx".toCharArray());
ssh.runTask(new ShellTask(ssh) {
    protected void onOpenSession(SessionChannelNG session)
            throws IOException, SshException, ShellTimeoutException {
        ExpectShell shell = new ExpectShell(this);
        // Execute the first command
        ShellProcess process = shell.executeCommand("ls -l");
        process.drain();
        String output = process.getCommandOutput();
        // After processing output execute another
        ShellProcessController controller =
                new ShellProcessController(
                        shell.executeCommand("rm -i file.txt"));
        if (controller.expect("remove")) {
            controller.typeAndReturn("y");
        }
        controller.getProcess().drain();
    }
});
ssh.disconnect();

Is it possible to enable exit on error behavior in an interactive Tcl shell?

I need to automate a huge interactive Tcl program using Tcl expect.
As I have realized, this territory is really dangerous: I need to extend the already existing mass of code, but I can't rely on errors actually causing the program to fail with a non-zero exit code as I could in a regular script.
This means I have to think about every possible thing that could go wrong and "expect" it.
What I currently do in my own code is use a "die" procedure that automatically exits, instead of raising an error. But this kind of error condition cannot be caught, and it makes it hard to detect errors, especially in code not written by me, since ultimately most library routines are error-based.
Since I have access to the program's Tcl shell, is it possible to enable fail-on-error?
EDIT:
I am using Tcl 8.3, which is a severe limitation in terms of available tools.
Examples of errors I'd like to automatically exit on:
% puts $a(2)
can't read "a(2)": no such element in array
while evaluating {puts $a(2)}
%
% blublabla
invalid command name "blublabla"
while evaluating blublabla
%
As well as any other error that makes a normal script terminate.
These can bubble up from 10 levels deep within procedure calls.
I also tried redefining the global error command, but not all errors that can occur in Tcl use it. For instance, the above "command not found" error did not go through my custom error procedure.
Since I have access to the program's Tcl shell, is it possible to enable fail-on-error?
Let me try to summarize in my words: You want to exit from an interactive Tcl shell upon error, rather than having the prompt offered again?
Update
I am using Tcl 8.3, which is a severe limitation in terms of available tools [...] only source patches to the C code.
As you seem to be deep down in that rabbit hole, why not add another source patch?
--- tclMain.c 2002-03-26 03:26:58.000000000 +0100
+++ tclMain.c.mrcalvin 2019-10-23 22:49:14.000000000 +0200
@@ -328,6 +328,7 @@
Tcl_WriteObj(errChannel, Tcl_GetObjResult(interp));
Tcl_WriteChars(errChannel, "\n", 1);
}
+ Tcl_Exit(1);
} else if (tsdPtr->tty) {
resultPtr = Tcl_GetObjResult(interp);
Tcl_GetStringFromObj(resultPtr, &length);
This is untested; the Tcl 8.3.5 sources don't compile for me. But this section of Tcl's internals is comparable to the current sources; I tested the equivalent change against my Tcl 8.6 source installation.
For the record
With a stock shell (tclsh), this is a little fiddly, I am afraid. The following might work for you (though I can imagine cases where it might fail). The idea is:
1) to intercept writes to stderr (this is where an interactive shell redirects error messages before returning to the prompt);
2) to discriminate between arbitrary writes to stderr and actual error cases, using the global variable ::errorInfo as a sentinel.
Step 1: Define a channel interceptor
oo::class create Bouncer {
    method initialize {handle mode} {
        if {$mode ne "write"} {error "can't handle reading"}
        return {finalize initialize write}
    }
    method finalize {handle} {
        # NOOP
    }
    method write {handle bytes} {
        if {[info exists ::errorInfo]} {
            # This is an actual error;
            # 1) Print the message (as usual), but to stdout
            fconfigure stdout -translation binary
            puts stdout $bytes
            # 2) Call on [exit] to quit the Tcl process
            exit 1
        } else {
            # Non-error write to stderr, proceed as usual
            return $bytes
        }
    }
}
Step 2: Register the interceptor for stderr in interactive shells
if {[info exists ::tcl_interactive]} {
    chan push stderr [Bouncer new]
}
Once registered, this will make your interactive shell behave like so:
% puts stderr "Goes, as usual!"
Goes, as usual!
% error "Bye, bye"
Bye, bye
Some remarks
You need to be careful about the Bouncer's write method: the error message has already been massaged for the character encoding (hence the fconfigure call).
You might want to put this into a Tcl package or Tcl module, so that the bouncer can be loaded using package req (see the sketch below).
I could imagine that your program writes to stderr while the errorInfo variable happens to be set (as a left-over); this would trigger an unintended exit.
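A minimal sketch of such a module, assuming it is saved as bouncer-1.0.tm in a directory listed on ::tcl::tm::path (both the file name and the location are illustrative):
# bouncer-1.0.tm -- wraps the channel interceptor from Step 1
package provide bouncer 1.0

# ... the oo::class create Bouncer definition from Step 1 goes here ...

if {[info exists ::tcl_interactive]} {
    chan push stderr [Bouncer new]
}
An interactive shell can then load it with package require bouncer, for instance from ~/.tclshrc.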

System command executes but is immediately backgrounded

I'm attempting to use the TProcess unit to execute ssh, connect to one of my servers, and get a shell. It's a rewrite of a tool I had in Ruby, as the execution time for Ruby is very slow. When I run my Process.Execute function, I am presented with the shell, but it is immediately backgrounded. Running pgrep ssh reveals that it is running, but I have no access to it whatsoever; using fg does not bring it back. The code for this segment is as follows:
if HasOption('c', 'connect') then begin
  TempFile:= GetRecord(GetOptionValue('c', 'connect'));
  AProcess:= TProcess.Create(nil);
  AProcess.Executable:= '/usr/bin/ssh';
  AProcess.Parameters.Add('-p');
  AProcess.Parameters.Add(TempFile.Port);
  AProcess.Parameters.Add('-ntt');
  AProcess.Parameters.Add(TempFile.Username + '@' + TempFile.Address);
  AProcess.Options:= [];
  AProcess.ShowWindow:= swoShow;
  AProcess.InheritHandles:= False;
  AProcess.Execute;
  AProcess.Free;
  Terminate;
  Exit;
end;
TempFile is a variable of type TProfile, which is a record containing information about the server. The cataloging system and retrieval work fine, but pulling up the shell does not.
...
AProcess.ShowWindow:= swoShow;
AProcess.InheritHandles:= False;
AProcess.Execute;
AProcess.Free;
...
You're starting the process but not waiting for it to exit. This is from the documentation on Execute:
Execute actually executes the program as specified in CommandLine, applying as much as of the specified options as supported on the current platform.
If the poWaitOnExit option is specified in Options, then the call will only return when the program has finished executing (or if an error occured). If this option is not given, the call returns immediatly[sic], but the WaitOnExit call can be used to wait for it to close, or the Running call can be used to check whether it is still running.
You should set the poWaitOnExit option in Options before calling Execute, so that Execute blocks until the process exits, or else call AProcess.WaitOnExit to explicitly wait for the process to exit.
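For instance, a minimal sketch of the two variants, applied to the snippet from the question (only the relevant lines are shown):
{ Variant 1: make Execute block until ssh exits }
AProcess.Options:= AProcess.Options + [poWaitOnExit];
AProcess.Execute;

{ Variant 2: Execute returns immediately; wait explicitly afterwards }
AProcess.Execute;
AProcess.WaitOnExit;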

Executing a script from inside code in VxWorks 6.7

In VxWorks 5.5.1 you could run a script using the execute command. In VxWorks 6.7 the execute command is no longer supported. Does anyone know if there is a replacement? I am specifically talking about calling it from inside code, not from the command line.
Through much research it appears that there are a few ways to accomplish this, but none is exactly the same as the execute command from before. As I stated in the comment below, it turns out that the execute command is not an official API call.
1) shellCmdExec can be used, but it must be called from inside the shell task.
2) The solution we chose to employ, which is to run the script from within our startup script.
3) And a hacky way:
fd = open("/y/startup.go", 0, 0)   /* open the script you want to execute */
v = shellFromNameGet("tShell0")    /* get the shell ID */
/* Use shellInOutGet to save off the standard in of the shell */
shellInOutSet (v, fd, -1, -1)      /* set the standard in of the shell to the file */
/* Here you should restore the standard in (do a shellInOutGet beforehand).
   Do it after the shell is done with the script; I would say that your script
   should increment a variable when it is done. */
close(fd)
There's a solution in the VxWorks Kernel Programmer's Guide 6.7; the problem is that it did not work for me, but it could help you:
shellGenericInit ("INTERPRETER=Cmd", 0, NULL, &shellTaskName, FALSE, FALSE, fdScript, STD_OUT, STD_ERR);
do
    taskDelay (sysClkRateGet ());
while (taskNameToId (shellTaskName) != ERROR);
close (fdScript);
Check Section 15.2.15 of the document.
You can do it in the serial driver layer. Try the following code. It shows how to send text to the shell's input.
For example:
pass_to_sio("memShow; ifconfig"); in your C code, or
-> sp pass_to_sio, "memShow; ifconfig" in the shell.
pass_to_sio("< test.scr"); in your C code if you want to run a script file, or
-> sp pass_to_sio, "< test.scr" in the shell if you want to run a script file.
void pass_to_sio(char *input)
{
    int old_priority;
    NS16550_CHAN *pChan = &ns16550Chan[0];  /* this line depends on your BSP */

    taskPriorityGet(taskIdSelf(), &old_priority);
    taskPrioritySet(taskIdSelf(), 250);     /* task priority must be lower than tShell0 */

    while (input != NULL && *input != '\0')
    {
        (*pChan->putRcvChar) (pChan->putRcvArg, *input);
        input++;
    }
    (*pChan->putRcvChar) (pChan->putRcvArg, '\r');

    taskPrioritySet(taskIdSelf(), old_priority);
}