How do I use the vxWorks debug agent to perform pre-kernel debugging? - vxworks

The vxWorks documentation states:
"The WDB agent itself is independent of the target operating system: it
attaches to run-time OS services through a virtual-function run-time
interface. The WDB agent can execute before VxWorks is running (as in
the early stages of porting a BSP to a new board)."
How can I use the debug agent before the vxWorks kernel is running?

First, to use the agent for pre-kernel debugging, you must have a serial port available; it has to be initialized and functional, as it will serve as the debug channel.
There is a limit to how early you can start debugging: WDB-based debugging starts after the first hardware initialization function runs (sysHwInit) and before kernel initialization proper (kernelInit).
Depending on the version of vxWorks being used, there are different ways to achieve this result.
Workbench-based vxWorks builds
In the kernel configuration tool, you must select the following components:
WDB serial connection
WDB system debugging
WDB pre kernel system initialization
Depending on the order in which you select components, you might get complaints from Workbench because some components are mutually exclusive (you can't have the WDB END driver with pre-kernel debugging). The order above should be fine.
Command-line builds
Edit the config.h file, and select the following options:
#define WDB_INIT WDB_PRE_KERNEL_INIT
#define WDB_COMM_TYPE WDB_COMM_SERIAL
#define WDB_MODE WDB_MODE_SYSTEM
When vxWorks is compiled with these options, it performs the first phase of hardware initialization and then suspends, waiting for the debug agent running on the host to connect to the target.
At that point, you can perform debugging, single-step, and so on.

Related

How do I build the OpenThread stack for the Thread Leader role?

I am new to OpenThread. I am trying to build Thread Leader and End Devices.
End Devices should not have routing capability. I built the Thread stack for the NXP target with BORDER_ROUTER=1. Under the output directory there are five binaries (ot-cli-ftd, ot-cli-mtd, ot-ncp-ftd, ot-ncp-mtd, ot-ncp-radio). I would like to know which binary should be placed on the Thread Leader and which on the End Device.
procedure followed:
./configure --enable-commissioner
make
make -f examples/Makefile-kw41z BORDER_ROUTER=1
If my procedure is wrong (I'm pretty sure it is), how do I build for the Thread Leader and the End Device? Which switches should I use when I run make?
All Thread Routers support the Leader role. The Full Thread Device (FTD) builds support the Router and Leader roles. The FTD binaries are generated with the default build configuration, so there is no need to specify any additional build parameters.
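The role-to-binary mapping can be sketched as a small helper. The binary names are those listed in the question; treating ot-cli-mtd as the End Device image is an assumption based on MTD builds lacking routing capability, so verify it against the OpenThread documentation:

```shell
# pick_binary: map a desired Thread role to an example binary name.
# Assumption: FTD images serve the Router/Leader roles, MTD images
# serve non-routing End Devices.
pick_binary() {
  case "$1" in
    leader|router) echo "ot-cli-ftd" ;;   # FTD: supports Router and Leader
    end-device)    echo "ot-cli-mtd" ;;   # MTD: no routing capability
    *)             echo "unknown role: $1" >&2; return 1 ;;
  esac
}

pick_binary leader       # -> ot-cli-ftd
pick_binary end-device   # -> ot-cli-mtd
```

The ot-ncp-* variants are the corresponding Network Co-Processor images; the same FTD/MTD split applies to them.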

AttachNotSupportedException when trying to start a JFR recording

I'm receiving AttachNotSupportedException when trying to start a JFR recording.
It was working normally, until now.
jcmd 3658 JFR.start maxsize=100M filename=jfr_1.jfr dumponexit=true settings=profile
Output:
3658:
com.sun.tools.attach.AttachNotSupportedException: Unable to open socket file: target process not responding or HotSpot VM not loaded
at sun.tools.attach.LinuxVirtualMachine.<init>(LinuxVirtualMachine.java:106)
at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
What might be happening?
OS: Oracle Linux Server release 6.7
$ java -version
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
One probable reason is that the /tmp/.java_pid1234 file has been deleted (where 1234 is the PID of the Java process).
Tools that rely on the Dynamic Attach mechanism (jstack, jmap, jcmd, jinfo) communicate with the JVM through a UNIX domain socket created in /tmp.
This socket is created by the JVM lazily on the first attach attempt, or eagerly at JVM initialization if the -XX:+StartAttachListener flag is specified.
Once the file corresponding to the socket is deleted, tools cannot connect to the target process, and unfortunately there is no way to re-create the communication socket without restarting the JVM.
For the description of Dynamic Attach Mechanism see this answer.
From personal experience: this problem also occurs when the development environment is split across partitions and the JVM lives on a partition with a different filesystem than the one holding the operating system. For example, the operating system partition is EXT4 while the development environment partition is NTFS (where the JVM runs), and the JVM cannot create the /tmp/.java_pid6024 file (where 6024 is the PID of the Java process).
To work around this, add -XX:+StartAttachListener to the startup options of the JVM or application server.
Another possibility: your app is running under systemd with PrivateTmp=yes. The service then gets its own private /tmp, so the /tmp/.java_pid1234 file cannot be found by outside tools.
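A quick first diagnostic step is simply testing whether the attach socket for the JVM still exists, using the /tmp/.java_pid&lt;pid&gt; path layout described above (the PID here is the one from the question):

```shell
# check_attach_socket: report whether the Dynamic Attach UNIX socket
# for the given JVM pid exists in /tmp (Linux path layout).
check_attach_socket() {
  if [ -S "/tmp/.java_pid$1" ]; then
    echo "attach socket present"
  else
    echo "attach socket missing"
  fi
}

check_attach_socket 3658   # "missing" means jcmd/jstack cannot attach
```

Note that the socket is created lazily, so it may legitimately be absent before the first attach attempt ever made against that JVM.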

Jenkins multiconfiguration project handle concurrent device usage

Case
I have a Jenkins slave which runs Calabash tests on mobile devices (Android, iOS). To separate on which machines the tests are run (the Mac for iOS, Linux for Android), I also use the Throttle Concurrent Builds plugin. This way I separate between the Android and Mac Jenkins slaves the devices are hooked to.
I use a mapping table and a self-written bash script to call a device by name and execute a test on that specific slave. The mapping table maps the name to the device ID (or IP for iOS).
The architecture is as follows:
[Master]--(Slave-iOS)---------iPhone6
| |--------------iPhone5
|
|--------(Slave-Android)-----HTCOne
|--------------Nexus
|--------------G4
To hand over the device to the bash script I use the Jenkins Matrix Project Plugin, which lets me create a list of devices and test cases like:
HTCOne Nexus G4
Run x x x
Delete x x x
CreateUser x x x
Sadly, this list can only be executed sequentially. Now I also want to run tests on multiple devices in parallel, and across the matrix.
Question
I search for a Jenkins plugin which handles devices allocation. If one trigger needs a specific device it should wait until this one is accessible and the test can be executed. The plugin should integrate with the shell execution in Jenkins.
A big plus would be, if it can be combined with the Matrix Project Plugin!
What I looked into so far:
Exclusion-Plugin,
Throttle Concurrent Builds plugin, [used to specify the slave]
Locks and Latches plugin.
For all the plugins listed so far, I don't know how to link them to the matrix configuration and obtain a device dynamically, nor how to get the locked-resource information into my script.
Port Allocator Plugin: not tested, but it seems to have the same problem.
External Resource Dispatcher: it seems to allocate only one resource, and it finds nothing if used in a matrix configuration.
Related questions I found, which helped but didn't solved the problem:
How to prevent certain Jenkins jobs from running simultaneously?
Jenkins: group jobs and limit build processors for this group
Jenkins to not allow the same job to run concurrently on the same node?
How do I ensure that only one of a certain category of job runs at once in Hudson?
Disable Jenkins Job from Another Job
If Throttle Concurrent Builds Plugin doesn't work as required in your multi-configuration project, try
Exclusion Plugin with a dynamic resource name, like: SEMAPHORE_MATRIX_${NODE_NAME}
Then add a Build step "Critical block start" (and an optional "Critical block end" step), which will hold this build block execution until SEMAPHORE_MATRIX_${NODE_NAME} is not in use on any other job, including the current Matrix child jobs.
(... Build steps to be run only when SEMAPHORE_MATRIX_${NODE_NAME} is available ...)
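If a plugin-free fallback is acceptable, the same per-device serialization can be sketched directly in the shell build step with flock. Assumptions: the competing jobs run on the same node (the lock file is only visible there), and the device name would normally come from the matrix axis variable rather than being hard-coded:

```shell
# Serialize access to one named device with an exclusive file lock.
# DEVICE is hard-coded here for illustration; in a matrix job it
# would come from the axis variable.
DEVICE="HTCOne"
LOCK="/tmp/device-$DEVICE.lock"

(
  flock -x 9                        # block until the device lock is free
  echo "running tests on $DEVICE"   # calabash invocation would go here
) 9>"$LOCK"
```

Unlike the Exclusion Plugin approach, this only coordinates jobs within one node, since the lock file is not shared across slaves.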

OpenDBX odbx_init blocks with gdb (eclipse)

I am testing OpenDBX to connect to MSSQL server for a project on Ubuntu Linux.
I am using C/C++ and eclipse CDT IDE.
I built a simple test app from the OpenDBX web page (shown below, without error checking).
odbx_init( &handle, "mssql", "172.16.232.60", "" );
odbx_bind( handle, "testdb", "testuser", "testpwd", ODBX_BIND_SIMPLE );
odbx_finish( handle );
Problem:
When I run the code from shell or Run->Run I see connection established with server (wireshark).
When I attempt to run from with eclipse debugger the application blocks on odbx_init(...) and I see nothing go out on wireshark (SYN/ACK).
I have gdb set up to run as sudo (see: how to debug application as root in eclipse in Ubuntu?).
I also use this same platform and setup to access network with sockets with other applications we are developing.
Any ideas on why odbx_init might be blocking from debugger?
One last bit of information to add. The issue does not occur when using the C++ API. Only the C API presents the issue described.
I found a work-around. Apparently the dynamic load of the backend library fails in Eclipse's GDB debug mode. To work around this, at the beginning of main() I explicitly load the library and then close it immediately. This puts the library in memory, so by the time the OpenDBX API calls are made the library is already resident. I am not sure about all the low-level details, but this allows me to debug OpenDBX in Eclipse. If anyone has a better explanation or fix/work-around, please let me know. Here is the workaround code at the beginning of main():
// At the top of the file (and link with -ldl):
#include <dlfcn.h>

// At the beginning of main():
void *lib_handle_mssql;
lib_handle_mssql = dlopen("/usr/lib/opendbx/libmssqlbackend.so", RTLD_NOW);
if (!lib_handle_mssql)
{
    // Bad, Bad, Bad...
    printf("%s\n", dlerror());
    exit(EXIT_FAILURE);
}
dlclose(lib_handle_mssql);
// Can now debug in eclipse IDE.

How are the vxWorks "kernel shell" and "host shell" different?

In the vxWorks RTOS, there is a shell that allows you to issue command to your embedded system.
The documentation refers to kernel shell, host shell and target shell. What is the difference between the three?
The target shell and the kernel shell are the same thing: a shell that runs on the target. You can connect to it using either a serial port or a telnet session.
A task runs on the target and parses all the commands received and acts on them, outputting data back to the port.
The host shell is a process that runs on the development station. It communicates with the debug agent on the target. All the commands are actually parsed on the host and only simplified requests are sent to the target agent:
Read/Write Memory
Set/Remove Breakpoints
Create/Delete/Suspend/Resume Tasks
Invoke a function
This results in less real-time impact to the target.
Both shells allow the user to perform low-level debugging (disassembly, breakpoints, etc.) and to invoke functions on the target.
There are some differences between the host shell and the target shell; you can use the h command to list the actual commands each shell supports.
The host shell supports more command-line editing features, such as auto-completion and symbol lookup.