Executing a script from inside code in VxWorks 6.7

In VxWorks 5.5.1 you could run a script using the execute command. In VxWorks 6.7 the execute command is no longer supported. Does anyone know if there is a replacement? I am specifically asking about calling it from inside code, not from the command line.

Through much research it appears there are a few ways to accomplish this, but none is exactly the same as the old execute command. As I stated in the comment below, it turns out that execute was never an official API call.
1) shellCmdExec can be used, but it must be called from inside the shell task.
2) The solution we chose to employ: invoke the script from within our startup script.
3) And a hack way:
fd = open("/y/startup.go", 0, 0);   /* open the script you want to execute */
v = shellFromNameGet("tShell0");    /* get the shell ID */
/* Use shellInOutGet here to save off the shell's standard input */
shellInOutSet(v, fd, -1, -1);       /* set the shell's standard input to the file */
/* Afterwards, restore the standard input you saved with shellInOutGet.
   Only do so once the shell is done with the script; I would say your
   script should increment a variable when it is done. */
close(fd);
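For completeness, here is how the pieces fit together as one routine - an untested sketch. The shellInOutGet signature is my assumption (a mirror image of the shellInOutSet call shown above), and scriptDone is a hypothetical flag that your script would increment as its last line; check shellLib.h on your installation before relying on either.

#include <vxWorks.h>
#include <ioLib.h>      /* open, close */
#include <fcntl.h>      /* O_RDONLY */
#include <taskLib.h>    /* taskDelay */
#include <sysLib.h>     /* sysClkRateGet */
#include <shellLib.h>   /* assumed home of shellFromNameGet and friends */

extern volatile int scriptDone;  /* hypothetical: the script's last line sets it */

void runScriptViaShellStdin(void)
{
    int fd, oldIn, oldOut, oldErr;
    SHELL_ID shellId;

    fd = open("/y/startup.go", O_RDONLY, 0);
    shellId = shellFromNameGet("tShell0");

    /* Save the shell's current standard I/O (assumed signature) */
    shellInOutGet(shellId, &oldIn, &oldOut, &oldErr);

    /* Point the shell's stdin at the script; -1 leaves stdout/stderr alone */
    shellInOutSet(shellId, fd, -1, -1);

    /* Wait until the script's final line flips the flag, then restore stdin */
    while (!scriptDone)
        taskDelay(sysClkRateGet());

    shellInOutSet(shellId, oldIn, -1, -1);
    close(fd);
}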

There's a solution in the VxWorks Kernel Programmer's Guide 6.7; it did not work for me, but it could help you:
shellGenericInit ("INTERPRETER=Cmd", 0, NULL, &shellTaskName, FALSE, FALSE,
                  fdScript, STD_OUT, STD_ERR);
do
    taskDelay (sysClkRateGet ());
while (taskNameToId (shellTaskName) != ERROR);
close (fdScript);
Check Section 15.2.15 of the document.

You can do it at the serial driver layer. The following code shows how to inject text into the shell's input. For example, to run commands:
pass_to_sio("memShow; ifconfig"); in your C code, or
-> sp pass_to_sio, "memShow; ifconfig" in the shell.
To run a script file:
pass_to_sio("< test.scr"); in your C code, or
-> sp pass_to_sio, "< test.scr" in the shell.
void pass_to_sio(char *input)
{
    int old_priority;
    NS16550_CHAN *pChan = &ns16550Chan[0];  /* this line depends on your BSP */

    taskPriorityGet(taskIdSelf(), &old_priority);
    /* task priority must be lower than tShell0's (numerically higher),
       so the shell can consume the characters as they arrive */
    taskPrioritySet(taskIdSelf(), 250);
    while (input != NULL && *input != '\0')
    {
        /* push each character into the driver's receive path, as if typed */
        (*pChan->putRcvChar) (pChan->putRcvArg, *input);
        input++;
    }
    (*pChan->putRcvChar) (pChan->putRcvArg, '\r');  /* end the line */
    taskPrioritySet(taskIdSelf(), old_priority);
}

Related

Asynchronous reading of stdout

I've written this simple script, which generates one output line per second (generator.sh):
for i in {0..5}; do echo $i; sleep 1; done
This Raku program launches the script and prints the lines as soon as they appear:
my $proc = Proc::Async.new("sh", "generator.sh");
$proc.stdout.tap({ .print });
my $promise = $proc.start;
await $promise;
Everything works as expected: we see a new line every second. But let's rewrite the generator in Raku (generator.raku):
for 0..5 { .say; sleep 1 }
and change the first line of the program to this:
my $proc = Proc::Async.new("raku", "generator.raku");
Now something is wrong: first we see the first line of output ("0"), then a long pause, and finally all the remaining lines at once.
I tried to capture the output of both generators via the script command:
script -c 'sh generator.sh' script-sh
script -c 'raku generator.raku' script-raku
Analyzing them in a hex editor, they look identical: each digit is followed by the bytes 0d 0a.
Why is there such a difference between seemingly identical generators? I need to understand this because I am going to launch an external program and process its output online.
Why is there such a difference between seemingly identical generators?
First, with regard to the title, the issue is not about the reading side, but rather the writing side.
Raku's I/O implementation looks at whether STDOUT is attached to a TTY. If it is a TTY, any output is immediately written to the output handle. However, if it's not a TTY, then it will apply buffering, which results in a significant performance improvement but at the cost of the output being chunked by the buffer size.
If you change generator.raku to disable output buffering:
$*OUT.out-buffer = False; for 0..5 { .say; sleep 1 }
Then the output will be seen immediately.
I need to understand this because I am going to launch an external program and process its output online.
It'll only be an issue if the external program you launch also has such a buffering policy.
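For comparison, C's stdio applies the same policy: output to a terminal is flushed line by line, while output to a pipe is block-buffered. Here is a minimal C sketch of the explicit opt-out, the counterpart of $*OUT.out-buffer = False:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* If stdout is not a terminal (e.g. it is a pipe), stdio normally
       block-buffers; turn buffering off so each line appears at once. */
    if (!isatty(fileno(stdout)))
        setvbuf(stdout, NULL, _IONBF, 0);

    for (int i = 0; i <= 5; i++) {
        printf("%d\n", i);
        sleep(1);
    }
    return 0;
}

The same trade-off applies: unbuffered writes are slower, so only disable buffering when a consumer needs the output immediately.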
In addition to Jonathan Worthington's answer: although buffering is an issue on the writing side, it is possible to cope with it on the reading side. stdbuf, unbuffer, or script can be used on Linux (see this discussion). On Windows only winpty helped me, which I found here.
So, if the files winpty.exe, winpty-agent.exe, winpty.dll, and msys-2.0.dll are in the working directory, this code can be used to run a program without buffering:
my $proc = Proc::Async.new(<winpty.exe -Xallow-non-tty -Xplain raku generator.raku>);

Using java jcabi SSH client (or other) to execute several commands in shell

I understand how to create an SSH shell:
Shell ssh = new SshByPassword("192.168.1.5", 22, "admin", "password");
I also understand how to run a command:
String output = new Shell.Plain(ssh).exec("some command");
and I can easily analyze the output string.
But how do I send one command after another in the same "shell"?
And a bonus question: sometimes the commands require user confirmation ("press Y to continue"); is that possible with the library?
Generally, most Java SSH APIs leave it to the developer to sort out the complexities of executing multiple commands within a shell. It is a complicated problem because SSH does not provide any indication of where commands start and end within the shell; the protocol only provides a stream of data, which is the raw output of the shell.
I would humbly like to introduce my project, Maverick Synergy, an open-source (LGPL) API that does provide an interface for interactive shells. I documented the options for interactive commands in an article.
Here is a very basic example. The ExpectShell class allows you to execute multiple commands, each time returning a ShellProcess that encapsulates the command output. You can use the ShellProcess InputStream to read the output; it will return EOF when the command is done.
You can also use a ShellProcessController to interact with the command as this example shows.
SshClient ssh = new SshClient("localhost", 22, "lee", "xxxxxx".toCharArray());

ssh.runTask(new ShellTask(ssh) {
    protected void onOpenSession(SessionChannelNG session)
            throws IOException, SshException, ShellTimeoutException {

        ExpectShell shell = new ExpectShell(this);

        // Execute the first command
        ShellProcess process = shell.executeCommand("ls -l");
        process.drain();
        String output = process.getCommandOutput();

        // After processing the output, execute another command
        ShellProcessController controller =
            new ShellProcessController(
                shell.executeCommand("rm -i file.txt"));
        if (controller.expect("remove")) {
            controller.typeAndReturn("y");
        }
        controller.getProcess().drain();
    }
});

ssh.disconnect();

Is it possible to enable exit on error behavior in an interactive Tcl shell?

I need to automate a huge interactive Tcl program using Tcl expect.
As I realized, this territory is really dangerous: I need to extend an already existing mass of code, and I can't rely on errors actually causing the program to fail with a non-zero exit code, as I could in a regular script.
This means I have to think about every possible thing that could go wrong and "expect" it.
What I currently do in my own code is use a "die" procedure that exits automatically, instead of raising an error. But this kind of error condition cannot be caught, and it makes errors hard to detect, especially in code not written by me, since ultimately most library routines are error-based.
Since I have access to the program's Tcl shell, is it possible to enable fail-on-error?
EDIT:
I am using Tcl 8.3, which is a severe limitation in terms of available tools.
Examples of errors I'd like to automatically exit on:
% puts $a(2)
can't read "a(2)": no such element in array
while evaluating {puts $a(2)}
%
% blublabla
invalid command name "blublabla"
while evaluating blublabla
%
As well as any other error that makes a normal script terminate.
These can bubble up from 10 levels deep within procedure calls.
I also tried redefining the global error command, but not all errors that can occur in Tcl use it. For instance, the above "command not found" error did not go through my custom error procedure.
Since I have access to the program's Tcl shell, is it possible to
enable fail-on-error?
Let me try to summarize in my own words: you want to exit from an interactive Tcl shell upon error, rather than having the prompt offered again?
Update
I am using Tcl 8.3, which is a severe limitation in terms of available
tools [...] only source patches to the C code.
As you seem to be deep down in that rabbit hole, why not add another source patch?
--- tclMain.c 2002-03-26 03:26:58.000000000 +0100
+++ tclMain.c.mrcalvin 2019-10-23 22:49:14.000000000 +0200
@@ -328,6 +328,7 @@
Tcl_WriteObj(errChannel, Tcl_GetObjResult(interp));
Tcl_WriteChars(errChannel, "\n", 1);
}
+ Tcl_Exit(1);
} else if (tsdPtr->tty) {
resultPtr = Tcl_GetObjResult(interp);
Tcl_GetStringFromObj(resultPtr, &length);
This is untested; the Tcl 8.3.5 sources don't compile for me. But this section of Tcl's internals is comparable to current sources; I tested the equivalent change against my Tcl 8.6 source installation.
For the records
With a stock shell (tclsh), this is a little fiddly, I am afraid. The following might work for you (though I can imagine cases where it might fail). The idea is:
to intercept writes to stderr (this is where an interactive shell prints error messages before returning to the prompt);
to discriminate between arbitrary writes to stderr and actual error cases, using the global variable ::errorInfo as a sentinel.
Step 1: Define a channel interceptor
oo::class create Bouncer {
    method initialize {handle mode} {
        if {$mode ne "write"} { error "can't handle reading" }
        return {finalize initialize write}
    }
    method finalize {handle} {
        # NOOP
    }
    method write {handle bytes} {
        if {[info exists ::errorInfo]} {
            # This is an actual error:
            # 1) Print the message (as usual), but to stdout
            fconfigure stdout -translation binary
            puts stdout $bytes
            # 2) Call on [exit] to quit the Tcl process
            exit 1
        } else {
            # Non-error write to stderr; proceed as usual
            return $bytes
        }
    }
}
Step 2: Register the interceptor for stderr in interactive shells
if {[info exists ::tcl_interactive]} {
    chan push stderr [Bouncer new]
}
Once registered, this will make your interactive shell behave like so:
% puts stderr "Goes, as usual!"
Goes, as usual!
% error "Bye, bye"
Bye, bye
Some remarks:
You need to be careful about the Bouncer's write method: the error message has already been massaged for the character encoding (hence the fconfigure call).
You might want to put this into a Tcl package or Tcl module, so you can load the bouncer using package req.
If your program writes to stderr while the errorInfo variable happens to be set (as a left-over), this will trigger an unintended exit.

Generating .gcda coverage files via QEMU/GDB

Executive summary: I want to use GDB to extract the coverage execution counts stored in memory in my embedded target, and use them to create .gcda files (for feeding to gcov/lcov).
The setup:
I can successfully cross-compile my binary, targeting my specific embedded target - and then execute it under QEMU.
I can also use QEMU's GDB support to debug the binary (i.e. use tar extended-remote localhost:... to attach to the running QEMU GDB server, and fully control the execution of my binary).
Coverage:
Now, to perform "on-target" coverage analysis, I cross-compile with
-fprofile-arcs -ftest-coverage. GCC then emits 64-bit counters to keep track of execution counts of specific code blocks.
Under normal (i.e. host-based, not cross-compiled) execution, __gcov_exit is called when the app finishes - it gathers all these execution counts into .gcda files (which gcov then uses to report coverage details).
In my embedded target however, there's no filesystem to speak of - and libgcov basically contains empty stubs for all __gcov_... functions.
Workaround via QEMU/GDB: To address this, and do it in a GCC-version-agnostic way, I could list the coverage-related symbols in my binary via MYPLATFORM-readelf, and grep-out the relevant ones (e.g. __gcov0.Task1_EntryPoint, __gcov0.worker, etc):
$ MYPLATFORM-readelf -s binary | grep __gcov
...
46: 40021498 48 OBJECT LOCAL DEFAULT 4 __gcov0.Task1_EntryPoint
...
I could then use the offsets/sizes reported to automatically create a GDB script - a script that extracts the counters' data via simple memory dumps (from offset, dump length bytes to a local file).
What I don't know (and failed to find any relevant info/tool), is how to convert the resulting pairs of (memory offset,memory data) into .gcda files. If such a tool/script exists, I'd have a portable (platform-agnostic) way to do coverage on any QEMU-supported platform.
Is there such a tool/script?
Any suggestions/pointers would be most appreciated.
UPDATE: I solved this myself, as you can read below - and wrote a blog post about it.
Turned out there was a (much) better way to do what I wanted.
The Linux kernel includes portable GCOV-related functionality that abstracts away the GCC version-specific details by providing this endpoint:
size_t convert_to_gcda(char *buffer, struct gcov_info *info)
So basically, I was able to do on-target coverage via the following steps:
Step 1
I added three slightly modified versions of the Linux gcov files to my project: base.c, gcc_4_7.c and gcov.h. I had to replace some Linux-isms inside them - like vmalloc, kfree, etc - to make the code portable (and thus compilable on my embedded platform, which has nothing to do with Linux).
Step 2
I then provided my own __gcov_init...
typedef struct tagGcovInfo {
    struct gcov_info *info;
    struct tagGcovInfo *next;
} GcovInfo;

GcovInfo *headGcov = NULL;

void __gcov_init(struct gcov_info *info)
{
    printf(
        "__gcov_init called for %s!\n",
        gcov_info_filename(info));
    fflush(stdout);
    GcovInfo *newHead = malloc(sizeof(GcovInfo));
    if (!newHead) {
        puts("Out of memory!");
        exit(1);
    }
    newHead->info = info;
    newHead->next = headGcov;
    headGcov = newHead;
}
...and __gcov_exit:
void __gcov_exit()
{
    GcovInfo *tmp = headGcov;
    while (tmp) {
        char *buffer;
        int bytesNeeded = convert_to_gcda(NULL, tmp->info);
        buffer = malloc(bytesNeeded);
        if (!buffer) {
            puts("Out of memory!");
            exit(1);
        }
        convert_to_gcda(buffer, tmp->info);
        printf("Emitting %6d bytes for %s\n",
               bytesNeeded, gcov_info_filename(tmp->info));
        free(buffer);
        tmp = tmp->next;
    }
}
Step 3
Finally, I scripted my GDB (driving QEMU remotely) via this:
$ cat coverage.gdb
tar extended-remote :9976
file bin.debug/fputest
b base.c:88 <================= This breaks on the "Emitting" printf in __gcov_exit
commands 1
silent
set $filename = tmp->info->filename
set $dataBegin = buffer
set $dataEnd = buffer + bytesNeeded
eval "dump binary memory %s 0x%lx 0x%lx", $filename, $dataBegin, $dataEnd
c
end
c
quit
And finally, executed both QEMU and GDB - like this:
$ # In terminal 1:
qemu-system-MYPLATFORM ... -kernel bin.debug/fputest -gdb tcp::9976 -S
$ # In terminal 2:
MYPLATFORM-gdb -x coverage.gdb
...and that's it - I was able to generate the .gcda files in my local filesystem, and then see coverage results over gcov and lcov.
UPDATE: I wrote a blog post showing the process in detail.

Redirecting to stdin in order to execute script in vxworks 6.7

I need to execute a script in VxWorks 6.7. In VxWorks 5.5 this could be done with the execute() function. The solution I am pursuing is stdin redirection, as in the following code:
newStdIn = open("myScript.txt",O_RDONLY,0644);
oldStdIn=ioGlobalStdGet(STD_IN);
ioGlobalStdSet(STD_IN, newStdIn);
/*Read file here and execute*/
ioGlobalStdSet(STD_IN,oldStdIn); /*Restore old stdIn*/
close(newStdIn);
I am missing the read and execute part (where the comment is).
EDIT:
According to the VxWorks Kernel Programmer's Guide, the way to execute a script is as follows:
fdScript = open ("myScript", O_RDONLY);
shellGenericInit ("INTERPRETER=Cmd", 0, NULL, &shellTaskName, FALSE, FALSE, fdScript, STD_OUT, STD_ERR);
do
taskDelay (sysClkRateGet ());
while (taskNameToId (shellTaskName) != ERROR);
close (fdScript);
But it opens a new shell without processing the script, and my application won't do anything after calling shellGenericInit.
It looks like you are starting a Cmd shell, rather than a C Interpreter shell.
Since you mention that you used the execute() function in 5.5, I assume that your script is for the C Interpreter.
Try changing "INTERPRETER=Cmd" to "INTERPRETER=C".
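If so, the guide's recipe becomes the following - the same code as above with only the interpreter string changed (an untested sketch):

fdScript = open ("myScript", O_RDONLY);
shellGenericInit ("INTERPRETER=C", 0, NULL, &shellTaskName, FALSE, FALSE,
                  fdScript, STD_OUT, STD_ERR);
do
    taskDelay (sysClkRateGet ());
while (taskNameToId (shellTaskName) != ERROR);
close (fdScript);

Note that the do/while loop blocks the calling task until the shell task exits; if your application needs to keep running, run this wait from a separate task.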