I'm trying to send a signal via the command line on an OpenVMS server. Using Perl, I have set up signal handlers between processes, and Perl on VMS is able to send POSIX signals. In addition, C++ programs are able to send and handle signals. However, the problem I run into is that the processes could be running on another node in the cluster, and I need to write a utility script to remotely send a signal to them.
I'm trying to avoid writing a new script and would rather simply execute a command remotely to send the signal from the command line. I need to send SIGUSR1, which translates to C$_SIGUSR1 for OpenVMS.
Thanks.
As far as I know, there is no supported command line interface to do this. But you can accomplish the task by calling an undocumented system service called SYS$SIGPRC(). This system service can deliver any condition value to the target process, not just POSIX signals. Here's the interface described in standard format:
FORMAT
SYS$SIGPRC [process-id] ,[process-name] ,condition-value
RETURNS
OpenVMS usage: cond_value
type: longword (unsigned)
access: write only
mechanism: by value
ARGUMENTS
process-id
OpenVMS usage: process_id
type: longword (unsigned)
access: modify
mechanism: by reference
Process identifier of the process that is to receive the signal. The
process-id argument is the address of an unsigned longword containing the
process identifier. If you do not specify process-id, process-name is
used.
The process-id is updated to contain the process identifier actually
used, which may differ from the one you originally requested if you
specified process-name.
process-name
OpenVMS usage: process_name
type: character string
access: read only
mechanism: by descriptor
A 1- to 15-character string specifying the name of the process that
is to receive the signal. The process-name argument is the
address of a descriptor pointing to the process name string. The name
must correspond exactly to the name of the process that is to receive
the signal; SYS$SIGPRC does not allow trailing blanks or abbreviations.
If you do not specify process-name, process-id is used. If you specify
neither process-name nor process-id, the caller's process is used.
Also, if you do not specify process-name and you specify zero for
process-id, the caller's process is used.
condition-value
OpenVMS usage: cond_value
type: longword (unsigned)
access: read only
mechanism: by value
OpenVMS 32-bit condition value. The condition-value argument is
an unsigned longword that contains the condition value delivered
to the process as a signal.
CONDITION VALUES RETURNED
SS$_NORMAL The service completed successfully
SS$_NONEXPR Specified process does not exist
SS$_NOPRIV The process does not have the privilege to signal
the specified process
SS$_IVLOGNAM The process name string has a length of 0 or has
more than 15 characters
(plus I suspect there are other possible returns having to do
with various cluster communications issues)
EXAMPLE CODE
#include <stdio.h>
#include <stdlib.h>
#include <ssdef.h>
#include <stsdef.h>
#include <descrip.h>
#include <errnodef.h>
#include <lib$routines.h>

/*
** To build:
**
**   $ cc sigusr1
**   $ link sigusr1
**
** Run example:
**
**   $ sigusr1 := $dev:[dir]sigusr1.exe
**   $ sigusr1 20206E53
*/

int main (int argc, char *argv[]) {
    static unsigned int pid;
    static unsigned int r0_status;

    /* Undocumented system service; declare it ourselves. */
    extern unsigned int sys$sigprc (unsigned int *,
                                    struct dsc$descriptor_s *,
                                    int);

    if (argc < 2) {
        (void)fprintf (stderr, "Usage: %s PID\n", argv[0]);
        exit (EXIT_FAILURE);
    }

    /* The PID is given in hexadecimal on the command line. */
    sscanf (argv[1], "%x", &pid);

    r0_status = sys$sigprc (&pid, 0, C$_SIGUSR1);
    if (!$VMS_STATUS_SUCCESS (r0_status)) {
        (void)lib$signal (r0_status);
    }
}
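On the cluster aspect of the question: I haven't verified whether SYS$SIGPRC accepts the PID of a process on another cluster member, so one way to cover that case is to run the utility on the node where the process lives, e.g. via SYSMAN's DO command. A sketch (the node name is a placeholder, dev:[dir] is your build location, and you need suitable privileges):

$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/NODE=(NODE42)
SYSMAN> DO MCR dev:[dir]SIGUSR1.EXE 20206E53
SYSMAN> EXIT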
Executive summary: I want to use GDB to extract the coverage execution counts stored in memory in my embedded target, and use them to create .gcda files (for feeding to gcov/lcov).
The setup:
I can successfully cross-compile my binary, targeting my specific embedded target - and then execute it under QEMU.
I can also use QEMU's GDB support to debug the binary (i.e. use tar extended-remote localhost:... to attach to the running QEMU GDB server, and fully control the execution of my binary).
Coverage:
Now, to perform "on-target" coverage analysis, I cross-compile with
-fprofile-arcs -ftest-coverage. GCC then emits 64-bit counters to keep track of execution counts of specific code blocks.
Under normal (i.e. host-based, not cross-compiled) execution, when the app finishes, __gcov_exit is called - and gathers all these execution counts into .gcda files (which gcov then uses to report coverage details).
In my embedded target however, there's no filesystem to speak of - and libgcov basically contains empty stubs for all __gcov_... functions.
Workaround via QEMU/GDB: To address this, and do it in a GCC-version-agnostic way, I could list the coverage-related symbols in my binary via MYPLATFORM-readelf, and grep-out the relevant ones (e.g. __gcov0.Task1_EntryPoint, __gcov0.worker, etc):
$ MYPLATFORM-readelf -s binary | grep __gcov
...
46: 40021498 48 OBJECT LOCAL DEFAULT 4 __gcov0.Task1_EntryPoint
...
I could then use the offsets/sizes reported to automatically create a GDB script - a script that extracts the counters' data via simple memory dumps (from offset, dump length bytes to a local file).
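For instance, for the __gcov0.Task1_EntryPoint symbol shown above (address 0x40021498, size 48 bytes), the generated script would contain something along these lines (dump binary memory is a standard GDB command; the output filename is just an example):

(gdb) dump binary memory __gcov0.Task1_EntryPoint.bin 0x40021498 0x40021498+48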
What I don't know (and failed to find any relevant info/tool for) is how to convert the resulting pairs of (memory offset, memory data) into .gcda files. If such a tool/script exists, I'd have a portable (platform-agnostic) way to do coverage on any QEMU-supported platform.
Is there such a tool/script?
Any suggestions/pointers would be most appreciated.
UPDATE: I solved this myself, as you can read below - and wrote a blog post about it.
Turned out there was a (much) better way to do what I wanted.
The Linux kernel includes portable GCOV-related functionality that abstracts away the GCC version-specific details by providing this endpoint:
size_t convert_to_gcda(char *buffer, struct gcov_info *info)
So basically, I was able to do on-target coverage via the following steps:
Step 1
I added three slightly modified versions of the Linux gcov files to my project: base.c, gcc_4_7.c and gcov.h. I had to replace some Linux-isms inside them - like vmalloc, kfree, etc. - to make the code portable (and thus compilable on my embedded platform, which has nothing to do with Linux); see the sketch just below.
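To give an idea, most of the replacements are one-line mappings from kernel allocators to their libc equivalents - a sketch, assuming your platform provides a working malloc; the exact set of Linux-isms depends on the kernel version you copied the files from:

/* Hypothetical portability shims for base.c / gcc_4_7.c */
#include <stdlib.h>

#define vmalloc(size)   malloc(size)
#define vfree(addr)     free(addr)
#define kfree(addr)     free(addr)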
Step 2
I then provided my own __gcov_init...
typedef struct tagGcovInfo {
    struct gcov_info *info;
    struct tagGcovInfo *next;
} GcovInfo;
GcovInfo *headGcov = NULL;

void __gcov_init(struct gcov_info *info)
{
    printf(
        "__gcov_init called for %s!\n",
        gcov_info_filename(info));
    fflush(stdout);
    GcovInfo *newHead = malloc(sizeof(GcovInfo));
    if (!newHead) {
        puts("Out of memory!");
        exit(1);
    }
    newHead->info = info;
    newHead->next = headGcov;
    headGcov = newHead;
}
...and __gcov_exit:
void __gcov_exit()
{
    GcovInfo *tmp = headGcov;
    while (tmp) {
        char *buffer;
        int bytesNeeded = convert_to_gcda(NULL, tmp->info);
        buffer = malloc(bytesNeeded);
        if (!buffer) {
            puts("Out of memory!");
            exit(1);
        }
        convert_to_gcda(buffer, tmp->info);
        printf("Emitting %6d bytes for %s\n", bytesNeeded, gcov_info_filename(tmp->info));
        free(buffer);
        tmp = tmp->next;
    }
}
Step 3
Finally, I scripted my GDB (driving QEMU remotely) via this:
$ cat coverage.gdb
tar extended-remote :9976
file bin.debug/fputest
# The breakpoint below lands on the "Emitting" printf in __gcov_exit
b base.c:88
commands 1
silent
set $filename = tmp->info->filename
set $dataBegin = buffer
set $dataEnd = buffer + bytesNeeded
eval "dump binary memory %s 0x%lx 0x%lx", $filename, $dataBegin, $dataEnd
c
end
c
quit
And finally, I executed both QEMU and GDB, like this:
$ # In terminal 1:
qemu-system-MYPLATFORM ... -kernel bin.debug/fputest -gdb tcp::9976 -S
$ # In terminal 2:
MYPLATFORM-gdb -x coverage.gdb
...and that's it - I was able to generate the .gcda files in my local filesystem, and then see coverage results via gcov and lcov.
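For completeness, the post-processing side is the standard gcov/lcov flow; for cross-compiled objects you typically have to point lcov at the matching cross gcov via --gcov-tool (names and paths here are examples):

$ lcov --capture --directory . --gcov-tool MYPLATFORM-gcov --output-file coverage.info
$ genhtml coverage.info --output-directory coverage-html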
UPDATE: I wrote a blog post showing the process in detail.
I'm working with a custom kernel char device which sometimes returns large negative values (around the thousands, say -2000) for its ioctl().
In userspace, I don't get these values returned from the ioctl call. Instead I get a return value of -1 back with errno set to the negated value from the kernel module (+2000).
As far as I can read and google, __syscall_return() is the macro which is supposed to interpret negative return values as errors. But, it only seems to look for values between -1 and -125. So I didn't expect these large negative values to be translated.
Where are these return values translated? Is it expected behaviour?
I am on Linux 2.6.35.10 with EGLIBC 2.11.3-4+deb6u6.
The translation and the move to errno happen at the libc level. Both GNU libc and μClibc treat negative numbers down to at least -4095 as error conditions; see http://www.makelinux.net/ldd3/chp-6-sect-1
See https://github.molgen.mpg.de/git-mirror/glibc/blob/85b290451e4d3ab460a57f1c5966c5827ca807ca/sysdeps/unix/sysv/linux/aarch64/ioctl.S for the GNU libc implementation of ioctl.
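So from userspace the convention to rely on is "-1 plus errno", never the raw kernel return value. A minimal sketch (the device path and request code are placeholders):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

#define MY_IOCTL 0xABCD                       /* placeholder request code */

int main(void)
{
    int fd = open("/dev/mydevice", O_RDWR);   /* placeholder device */
    if (fd == -1) {
        perror("open");
        return 1;
    }
    /* If the driver returns -2000, libc stores 2000 in errno and
       hands the caller -1 instead. */
    if (ioctl(fd, MY_IOCTL, 0) == -1)
        printf("ioctl failed: errno=%d (%s)\n", errno, strerror(errno));
    return 0;
}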
So, with the help of BRPocock I will report my findings here.
The Linux kernel does an error check for all syscalls, along these lines (from unistd.h):
#define __syscall_return(type, res) \
do { \
        if ((unsigned long)(res) >= (unsigned long)(-125)) { \
                errno = -(res); \
                res = -1; \
        } \
        return (type) (res); \
} while (0)
Libc also does an error check for all syscalls, along these lines (from syscall.S):
        .text
ENTRY (syscall)

        PUSHARGS_6              /* Save register contents. */
        _DOARGS_6(44)           /* Load arguments. */
        movl 20(%esp), %eax     /* Load syscall number into %eax. */
        ENTER_KERNEL            /* Do the system call. */
        POPARGS_6               /* Restore register contents. */
        cmpl $-4095, %eax       /* Check %eax for error. */
        jae SYSCALL_ERROR_LABEL /* Jump to error handler if error. */
        ret                     /* Return to caller. */

PSEUDO_END (syscall)
Glibc gives a reason for the -4095 value (from sysdep.h):
/* Linux uses a negative return value to indicate syscall errors,
unlike most Unices, which use the condition codes' carry flag.
Since version 2.1 the return value of a system call might be
negative even if the call succeeded. E.g., the `lseek' system call
might return a large offset. Therefore we must not anymore test
for < 0, but test for a real error by making sure the value in %eax
is a real error number. Linus said he will make sure the no syscall
returns a value in -1 .. -4095 as a valid result so we can savely
test with -4095. */
__syscall_return seems to be missing from newer kernels; I haven't researched that yet.
I was wondering if it is possible to access debug information in a running application that has been compiled with /DEBUG (Pascal and/or C), in order to retrieve information about structures used in the application.
The application can always ask the debugger to do something using SS$_DEBUG. If you send a list of commands that ends with GO, the application will continue running after the debugger does its thing. I've used it to dump a bunch of structures, formatted neatly, without having to write the code myself; see the sketch below.
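A minimal sketch of that technique, assuming the usual convention of passing the DEBUG command list as a string-descriptor argument of the signal (the commands themselves are just examples):

#include <ssdef.h>
#include <descrip.h>
#include <lib$routines.h>

void dump_structs_via_debugger(void)
{
    /* The debugger executes the commands; the trailing GO resumes us. */
    $DESCRIPTOR(cmds, "SHOW CALLS; EXAMINE my_struct; GO");
    lib$signal(SS$_DEBUG, 1, &cmds);
}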
ANALYZE/IMAGE can be used to examine the debugger data in the image file without running the application.
Although you may not see the nice debugger information, you can always look into a running program's data with ANALYZE/SYSTEM .. SET PROCESS ... EXAMINE ....
The SDA SEARCH command may come in handy to 'find' recognizable morsels of data, like a record that you know the program must have read.
Also check out FORMAT/TYPE=block-type, but to make use of your data you'll have to compile your structures into .STB files.
When using SDA, you may want to try running the program yourself interactively in another session to get sample addresses to work from... easier than a link map!
If your programs use RMS a bunch (mine always do :-), then SDA> SHOW PROC/RMS=(FAB,RAB) may give handy addresses for record and key buffers, although those may also be managed by the RTLs and thus not be meaningful to you.
Too long for a comment ...
As far as I know, structure information about elements is not in the global symbol table.
What I did, on Linux, but that should work on VMS/ELF files as well:
$ cat tests.c
struct {
    int    ii;
    short  ss;
    float  ff;
    char   cc;
    double dd;
    char   bb:1;
    void  *pp;
} theStruct;
...
$ cc -g -c tests.c
$ ../extruct/extruct
-e-insarg, supply an ELF object file.
Usage: ../extruct/extruct [OPTION]... elf-file variable
Display offset and size of members of the named struct/union variable
extracted from the dwarf info in the elf file.
Options are:
-b bit offsets and bit sizes for all members
-lLEVEL display level for nested structures
-n only the member names
-t print base types
$ ../extruct/extruct -t ./tests.o theStruct
size of theStruct: 0x20
offset size type name
0x0000 0x0004 int ii
0x0004 0x0002 short int ss
0x0008 0x0004 float ff
0x000c 0x0001 char cc
0x0010 0x0008 double dd
0x0018 0x0001 char bb:1
0x001c 0x0004 pp
$
I'd like to try using valgrind to do some heap corruption detection. With the following corruption "unit test":
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int main()
{
    char *c = (char *) malloc(10);
    memset(c, 0xAB, 20);     /* writes 10 bytes past the end of the block */
    printf("not aborted\n");
    return 0;
}
I was surprised to find that valgrind doesn't abort on error, but just produces a message:
valgrind -q --leak-check=no a.out
==11097== Invalid write of size 4
==11097== at 0x40061F: main (in /home/hotellnx94/peeterj/tmp/a.out)
==11097== Address 0x51c6048 is 8 bytes inside a block of size 10 alloc'd
==11097== at 0x4A2058F: malloc (vg_replace_malloc.c:236)
==11097== by 0x400609: main (in /home/hotellnx94/peeterj/tmp/a.out)
...
not aborted
I don't see a valgrind option to abort on error (like gnu-libc's mcheck does; I can't use mcheck because it isn't thread safe). Does anybody know if that is possible? Our code dup2's stdout to /dev/null since it runs as a daemon, so a report isn't useful - I'd rather catch the culprit in the act, or close to it.
There is no such option in valgrind.
Consider adding a non-daemon (debug) mode to your daemon.
Section 4.6 of the Memcheck manual, http://valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs, describes the client requests a program can make to Valgrind/Memcheck while being debugged; you can use some of these in your daemon to run checks at fixed code positions, as sketched below.
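For instance, the error-count client request lets the daemon check at fixed points whether Memcheck has seen anything, and abort close to the culprit. A sketch - the macro expands to a no-op when the program is not running under Valgrind:

#include <stdlib.h>
#include <valgrind/valgrind.h>

/* Call this at interesting checkpoints in the daemon. */
static void abort_on_valgrind_errors(void)
{
    if (VALGRIND_COUNT_ERRORS > 0)   /* errors reported so far */
        abort();                     /* die near the corruption, not at exit */
}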
I have some issues with character encoding of a binary value using Qt4 and MySQL5.
Let's say we want to bind a value containing the four bytes \xDE \xAD \xBE \xEF. I check the bound value with the MySQL function HEX(), using this code:
#include <QtGui/QApplication>
#include <QDebug>
#include <QSqlQuery>
#include <QVariant>
#include <QSqlRecord>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL");
    if (!db.open("test", "test"))
        exit(1);

    QSqlQuery q("SELECT HEX(?)");
    q.addBindValue(QVariant(QByteArray::fromHex("DEADBEEF")));
    if (!q.exec())
        exit(1);
    if (!q.next())
        exit(1);
    qDebug() << q.record().value(0).toString();

    return a.exec();
}
The output of this code is "DEADEFBFBDEFBFBD" - which is what you get when the four bytes are decoded as UTF-8 (\xDE\xAD is a valid two-byte sequence, while the invalid bytes \xBE and \xEF are each replaced by U+FFFD, whose UTF-8 encoding is EF BF BD) and then re-encoded as UTF-8.
If, instead of binding the value with addBindValue(), I place it directly into the query using UNHEX('DEADBEEF'), I get the expected behaviour (which isn't surprising...).
Where does the UTF8 encoding step take place?
(Ultimately, I want to store a binary value unchanged, 1:1, in a BLOB field.)
OS: Ubuntu 10.10 (32 bit)
Qt Version: 4.7.0 (Ubuntu package)
MySQL Version: 5.1.49-1ubuntu8.1
Thanks in advance!
After weeks of trial and error, the only solution I found is to transfer the binary data hex-encoded and UNHEX() it in the query, as sketched below.
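A sketch of that workaround (table and column names are made up):

// Send the bytes hex-encoded and let MySQL decode them server-side.
QSqlQuery q;
q.prepare("INSERT INTO blobs (data) VALUES (UNHEX(?))");
q.addBindValue(QString::fromLatin1(binary.toHex().constData()));  // 'binary' is the QByteArray
q.exec();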
This works, so I allow myself to accept my own answer, but it isn't a nice solution, and I have no explanation for the behaviour of the code above.
So if you have any advice, I'm still looking forward to further answers. (I will then accept yours.) Thanks!