OpenJDK debug with printf?

I am hacking OpenJDK 7 to implement an algorithm. In the process I need to output debug information to stdout. As far as I can see in the code base, all printing is done by calling print_cr() on an outputStream*. I wonder why printf() was not used at all?
Part of the reason I'm asking is that I have in fact used a lot of printf() calls, and I have been seeing weird bugs such as random memory corruption and random JVM crashes. Is there any chance that my printf() calls are the root cause? (Assume that the logic of my code is bug-free, of course.)

why printf() was not used at all?
Instead of using stdio directly, HotSpot utilizes its own printing and logging framework. This extra abstraction layer provides the following benefits:
Allows printing not only to stdout but to an arbitrary stream. Different JVM parts may log to separate streams (e.g. a dedicated stream for GC logs).
Has its own implementation of formatting and buffering that does not allocate memory or use global locks.
Gives control over all output emitted by JVM. For example, all output can be easily supplemented with timestamps.
Facilitates porting to different platforms and environments.
The framework is further improved in JDK 9 to support JEP 158: Unified JVM Logging.
Is there any chance that my printf() is the root cause?
No, unless printf is misused: e.g. the arguments do not match the format specifiers, or printf is called inside a signal handler. Otherwise it is safe to use printf for debugging; I did so many times when I worked on HotSpot.
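To make the "misused printf" case concrete, here is a self-contained sketch (not HotSpot code; the variable names are invented) of the kind of format-specifier mismatch that can crash the process, next to a safe call. Inside HotSpot itself the idiomatic equivalent would be something like tty->print_cr(...) on the global outputStream.

```cpp
// Minimal sketch (not HotSpot code): mismatched format specifiers versus a
// safe call. Variable names are invented for the example.
#include <cstdio>

int main() {
    std::size_t heap_used = 123456789;      // hypothetical values,
    const char* phase = "concurrent-mark";  // purely for illustration

    // WRONG (kept commented out): the arguments are in the wrong order for
    // the specifiers, so printf reads a size_t as a char* and will usually
    // crash -- the "misused printf" case mentioned above.
    // std::printf("phase=%s used=%zu\n", heap_used, phase);

    // Safe: specifiers match the argument types (%zu for size_t, %s for the
    // C string), and this is not called from a signal handler.
    std::printf("used=%zu phase=%s\n", heap_used, phase);
    std::fflush(stdout);  // stdio is buffered; flush so output survives a crash
    return 0;
}
```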

Related

What happens if an MPI process crashes?

I am evaluating different multiprocessing libraries for a fault tolerant application. I basically need any process to be allowed to crash without stopping the whole application.
I can do it using the fork() system call. The limitation here is that the process can only be created on the same machine.
Can I do the same with MPI? If a process created with MPI crashes, can the parent process keep running and eventually create a new process?
Is there any alternative (possibly multiplatform and open source) library to get the same result?
As reported here, MPI 4.0 will have support for fault tolerance.
If you want collectives, you're going to have to wait for MPI-3.something (as High Performance Mark and Hristo Iliev suggest).
If you can live with point-to-point, and you are a patient person willing to raise a bunch of bug reports against your MPI implementation, you can try the following (a minimal sketch follows this list):
disable the default MPI error handler
carefully check every single return code from your MPI programs
keep track in your application of which ranks are up and which are down. Oh, and when they go down they can never come back. But you're unable to use collectives anyway (see my opening statement), so that's not a huge deal, right?
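A minimal sketch of that recipe, assuming the MPI-2 calls MPI_Comm_set_errhandler and MPI_ERRORS_RETURN (the rank-bookkeeping names are made up; whether a send to a dead rank actually returns an error rather than hanging is implementation-dependent, which is where the bug reports come in):

```cpp
// Sketch only: disable the abort-on-error handler, check return codes,
// and track live ranks in the application.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // 1. Stop MPI from aborting the whole job on the first error.
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // 3. Application-level bookkeeping of which ranks we still trust.
    std::vector<bool> rank_alive(size, true);

    // 2. Check the return code of every point-to-point call
    //    (the matching receive on the peer is omitted for brevity).
    int peer  = (rank + 1) % size;
    int token = rank;
    int rc = MPI_Send(&token, 1, MPI_INT, peer, /*tag=*/0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        rank_alive[peer] = false;   // it never comes back; route around it
        std::fprintf(stderr, "rank %d: send to %d failed (rc=%d)\n", rank, peer, rc);
    }

    MPI_Finalize();
    return 0;
}
```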
Here's an old paper (back when Bill still worked at Argonne; I think it's from 2003): http://www.mcs.anl.gov/~lusk/papers/fault-tolerance.pdf. It lays out the kinds of fault tolerant things one can do in MPI. Perhaps such a "constrained MPI" might still work for your needs.
If you're willing to go for something research quality, there are two implementations of a potential fault tolerance chapter for a future version of MPI (MPI-4?). The proposal is called User Level Failure Mitigation. There's an experimental version in MPICH 3.2a2 and a branch of Open MPI that also provides the interfaces. Both are far from production quality, but you're welcome to try them out. Just know that since this isn't in the MPI Standard, the function prefixes are not MPI_*. For MPICH, they're MPIX_*; for the Open MPI branch, they're OMPI_* (though I believe they'll be changing theirs to MPIX_* soon as well).
As Rob Latham mentioned, there will be lots of work you'll need to do within your app to handle failures, though you don't necessarily have to check all of your return codes. You can/should use MPI error handlers as a callback function to simplify things. There's information/examples in the spec available along with the Open MPI branch.

NSTask vs System - pros and cons?

I'm at a point in a project where I need to call system commands. I originally started looking at NSTask (as that seems to be the most popular approach), but recently I came across the system command. It looks like a far easier setup than NSTask. I've seen some questions/answers that say NSTask is the better approach, but I don't see an explanation of why.
What are the advantages/disadvantages between the two
In what cases would one more likely to be used than the other
Any help/links/thoughts/ideas? (And yes, I did a Google search.)
NSTask:
Can run its task in the background. Allows you to send interrupts and kills to the underlying process, and allows you to suspend or resume the underlying process without setting up threads yourself. Can also run synchronously if that's what you want.
Lets you work back and forth with Cocoa classes, like NSString, without having to do a bunch of conversions.
Lets you set I/O streams for the underlying process that differ from the caller's.
Better supported across all Apple platforms (like iOS) than system(3) -- I don't think system even works on iOS.
Requires Cocoa and Objective-C.
Doesn't interpret shell arguments or do path expansions of arguments.
system(3):
Better supported across all Unix-like platforms.
Can run a task with a one-liner.
Only requires C.
Runs in a shell and will interpret working directory and arguments like /bin/sh would.
For a Cocoa app I always use NSTask; I only use system if I'm doing something that must be C-only or I know will have to run under non-Mac environments. As it is, system is pretty brittle and the more robust solution is doing a fork-exec, because it allows you more control over streams and concurrent operation.
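For what it's worth, a minimal fork-exec sketch of the kind referred to above (plain POSIX; `ls -l` is just a placeholder command): the argv is passed directly with no shell involved, and the child's stdout is captured through a pipe, which is the extra control over streams and concurrency being described.

```cpp
// POSIX fork/exec sketch; "ls -l" is a placeholder command.
#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

int main() {
    int fds[2];
    if (pipe(fds) != 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {                               // child
        dup2(fds[1], STDOUT_FILENO);              // send stdout into the pipe
        close(fds[0]);
        close(fds[1]);
        execlp("ls", "ls", "-l", (char*)nullptr); // argv passed directly, no shell
        _exit(127);                               // only reached if exec fails
    }

    close(fds[1]);                                // parent: read the child's output
    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fds[0]);

    int status = 0;
    waitpid(pid, &status, 0);                     // reap the child, get its status
    return 0;
}
```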
There are some differences. For some of them it is probably hard to say in general whether it is an advantage or not.
system() starts a shell. NSTask doesn't.
system() blocks. NSTask runs asynchronously.
system() only takes args. NSTask works with pipes.
system() has only an integer exit code. NSTask works with pipes. (Yes, mentioned again. This is for output.)
system() takes a complete command line. NSTask takes its arguments as an array.
system() runs in the current directory. NSTask lets you pass a working directory.
These are some differences off the top of my head, without rechecking the documentation; it is only an overview. (A short system() example follows.)
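To make the system() side concrete, a small sketch (the command string and paths are only illustrative): it is a one-liner, the string goes to /bin/sh (which expands ~ and *.txt and handles the redirection), the call blocks until the shell finishes, and all that comes back is a wait status to decode.

```cpp
#include <cstdlib>
#include <cstdio>
#include <sys/wait.h>

int main() {
    // One line: the string is handed to /bin/sh, which expands ~ and *.txt
    // and performs the redirection -- none of which NSTask would do for you.
    int status = std::system("ls ~/Documents/*.txt > /tmp/out.txt");

    // system() blocks until the shell exits and returns only a wait status;
    // the shell's exit code has to be decoded from it.
    if (status != -1 && WIFEXITED(status))
        std::printf("shell exit code: %d\n", WEXITSTATUS(status));
    return 0;
}
```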

sbcl vs clisp: USOCKET:TIMEOUT-ERROR. Do the two implementations access USOCKET differently?

I have a script that uses quicklisp to load zs3 for accessing Amazon's S3.
When I run the script with clisp, when (zs3:bucket-exists-p "Test") is run, USOCKET:TIMEOUT-ERROR occurs.
However, when I run it with sbcl, it runs properly.
Do they access usocket differently?
What are the pros and cons of each?
usocket is a compatibility layer which hides the underlying socket API of each Lisp implementation. There is bound to be an impedance mismatch in some cases, but for the most part it should just work.
I suspect zs3 is not often used with CLISP (or perhaps not at all!), and you're seeing the result of that. On the other hand one can generally expect libraries to be well-tested under SBCL since that is the most popular implementation.
Note also that threads are still experimental in CLISP; they are not enabled by default. The fact that sockets are often mixed with threads only decreases the relative use of CLISP + usocket.

Simulating multiple instances of an embedded processor

I'm working on a project which will entail multiple devices, each with an embedded (ARM) processor, communicating. One development approach which I have found useful in the past with projects that only entailed a single embedded processor was to develop the code using Visual Studio, divided into three portions:
Main application code (in unmanaged C/C++ [see note])
I/O-simulating code (C/C++) that runs under Visual Studio
Embedded I/O code (C), which Visual Studio is instructed not to build, runs on the target system. Previously this code was for the PIC; for most future projects I'm migrating to the ARM.
Feeding the embedded compiler/linker the code from parts 1 and 3 yields a hex file that can run on the target system. Running parts 1 and 2 together yields code which can run on the PC, with the benefit of better debugging tools and more precise control over I/O behavior (e.g. I can make the simulation code introduce certain types of random hiccups more easily than I can induce controlled hiccups on real hardware).
Target code is written in C, but the simulation environment uses C++ so as to simulate I/O registers. For example, I have a PortArray data structure; the header file for the embedded compiler includes a line like `unsigned char LATA @ 0xF89;` and my header file for simulation includes `#define LATA _IOBIT(f89,1)`, which in turn invokes a macro that accesses a suitable property of an I/O object, so a statement like `LATA |= 4;` will read the simulated latch, "or" the read value with 4, and write the new value. To make this work, the target code has to compile under C++ as well as under C, but this mostly isn't a problem. The biggest annoyance is probably with enum types (which behave as integers in C, but have to be coaxed to do so in C++).
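A hedged sketch of the kind of C++ register proxy described here; the names (SimPort, RegRef, sim_f89) are illustrative rather than the actual project code, and the real _IOBIT macro takes a second argument that is omitted. The point is that `LATA |= 4;` compiles unchanged for both the embedded target and the PC simulation:

```cpp
#include <cstdint>
#include <cstdio>

// Simulated I/O port: reads and writes go through methods where logging,
// induced hiccups, or forwarding over TCP could be injected.
struct SimPort {
    uint16_t addr;
    uint8_t  latch;
    uint8_t read() const  { std::printf("read  0x%03X\n", (unsigned)addr); return latch; }
    void write(uint8_t v) { std::printf("write 0x%03X <= 0x%02X\n", (unsigned)addr, (unsigned)v); latch = v; }
};

// One simulated port object per I/O address the target code touches.
static SimPort sim_f89{0xF89, 0};

// Proxy returned by the register macro: conversions and compound assignments
// turn into read()/write() calls on the underlying SimPort.
struct RegRef {
    SimPort& p;
    operator uint8_t() const      { return p.read(); }
    RegRef& operator=(uint8_t v)  { p.write(v); return *this; }
    RegRef& operator|=(uint8_t v) { p.write(p.read() | v); return *this; }
};

// Simplified stand-ins for the question's macros (the bit-width argument of
// the real _IOBIT is omitted here).
#define _IOBIT(port) (RegRef{sim_##port})
#define LATA _IOBIT(f89)

int main() {
    LATA |= 4;   // reads the simulated latch, ORs in 4, writes the result back
    return 0;
}
```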
Previously, I've used two approaches to making the simulation interactive:
Compile and link a DLL with target-application and simulation code, and have VB code in the same project which interacts with it.
Compile the target-application code and some simulation code to an EXE with instance of Visual Studio, and use a second instance of Visual Studio for the simulation-UI. Have the two programs communicate via TCP, so nearly all "real" I/O logic is in the simulation program. For example, the aforementioned `LATA |= 4;` would send a "read port 0xF89" command to the TCP port, get the response, process the received value, and send a "write port 0xF89" command with the result.
I've found the latter approach to run a tiny bit slower than the former in some cases, but it seems much more convenient for debugging, since I can suspend execution of the unmanaged simulation code while the simulation UI remains responsive. Indeed, for simulating a single target device at a time, I think the latter approach works extremely well. My question is how I should best go about simulating a plurality of target devices (e.g. 16 of them).
The difficulty I have is figuring out how to make each simulated instance get its own set of global variables. If I were to compile to an EXE and run one instance of the EXE for each simulated target device, that would work, but I don't know any practical way to maintain debugger support while doing that. Another approach would be to arrange the target code so that everything would compile as one module joined together via #include. For simulation purposes, everything could then be wrapped into a single C++ class, with global variables turning into class-instance variables. That would be a bit more object-oriented, but I really don't like the idea of forcing all the application code to live in one compiled and linked module.
What would perhaps be ideal would be if the code could load multiple instances of the DLL, each with its own set of global variables. I have no idea how to do that, however, nor do I know how to make things interact with the debugger. I don't think it's really necessary that all simulated target devices actually execute code simultaneously; it would be perfectly acceptable for simulation instances to use cooperative multitasking. If there were some way of finding out what range of memory holds the global variables, it might be possible to have the 'task-switch' method swap out all of the global variables used by the previously-running instance and swap in the contents applicable to the instance being switched in. Although I'd know how to do that in an embedded context, I'd have no idea how to do it on the PC.
Edit
My questions would be:
Is there any nicer way to allow simulation logic to be paused and examined in VS2010 debugger, while keeping a responsive UI for the simulator front-end, than running the simulator front end and the simulator logic in separate instances of VS2010, if the simulation logic must be written in C and the simulation front end in managed code? For example, is there a way to tell the debugger that when a breakpoint is hit, some or all other threads should be allowed to keep running while the thread that had hit the breakpoint sits paused?
If the bulk of the simulation logic must be source-code compatible with an embedded system written in C (so that the same source files can be compiled and run for simulation purposes under VS2010, and then compiled by the embedded-systems compiler for use in real hardware), is there any way to have the VS2010 debugger interact with multiple simulated instances of the embedded device? Assume performance is not likely to be an issue, but the number of instances will be large enough that creating a separate project for each instance would likely be annoying in the absence of any way to automate the process. I can think of three somewhat-workable approaches, but don't know how to make any of them work really nicely. There's also an approach which would be better if it's possible, but I don't know how to make it work.
Wrap all the simulation code within a single C++ class, such that what would be global variables in the target system become class members. I'm leaning toward this approach, but it would seem to require everything to be compiled as a single module, which would annoyingly affect the design of the target system code. Is there any nice way to have code access class instance members as though they were globals, without requiring all functions using such instances to be members of the same module?
Compile a separate DLL for each simulated instance (so that e.g. if I want to run up to 16 instances, I would include 16 DLLs in the project, all sharing the same source files). This could work, but every change to the project configuration would have to be repeated 16 times. Really ugly.
Compile the simulation logic to an EXE, and run an appropriate number of instances of that EXE. This could work, but I don't know of any convenient way to do things like set a breakpoint common to all instances. Is it possible to have multiple running instances of an EXE attached to a single debugger instance?
Load multiple instances of a DLL in such a way that each instance gets its own global variables, while still being accessible in the debugger. This would be nicest if it were possible, but I don't know any way to do so. Is it possible? How? I've never used AppDomains, but my intuition would suggest that might be useful here.
If I use one VS2010 instance for the front-end, and another for the simulation logic, is there any way to arrange things so that starting code in one will automatically launch the code in the other?
I'm not particularly committed to any single simulation approach; while it might be nice to know if there's some way of slightly improving the above, I'd also like to know of any other alternative approaches that could work even better.
I would think that you'd still have to run 16 copies of your main application code, but that your TCP-based I/O simulator could keep a different set of registers/state for each TCP connection that comes in.
Instead of a bunch of global variables, put them into a single structure that encompasses the I/O state of a single device. Either spawn off a new thread for each socket, or just keep a list of active sockets and dedicate a single instance of the state structure for each socket.
The simulators I have seen that handle multiple instances of the instruction set/processor are designed that way. There is usually a structure that contains a complete set of registers, and a pointer to, or an array of, these structures is used to multiply them into multiple instances of the processor.
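A minimal sketch of that idea, with invented member names: everything that used to be a global in the target code becomes a member of a per-device state structure, and the simulator either hands one instance to each socket/thread or steps through an array of them cooperatively.

```cpp
// Sketch with invented field names: former globals move into a per-device
// state structure.
#include <cstdint>
#include <vector>

struct DeviceState {
    uint8_t  lata;          // simulated I/O latch
    uint8_t  porta;         // simulated input port
    uint32_t tick_count;    // ...and so on for the remaining former globals
};

// One entry per simulated device (or per TCP connection in the socket-based
// simulator described above).
static std::vector<DeviceState> devices(16);

// Cooperative round-robin: run one "step" of each instance in turn, passing
// its state explicitly instead of touching globals.
static void step_device(DeviceState& dev) {
    dev.tick_count++;
    // ... target logic operating on dev.lata, dev.porta, ...
}

int main() {
    for (int cycle = 0; cycle < 1000; ++cycle)
        for (DeviceState& dev : devices)
            step_device(dev);
    return 0;
}
```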

how to debug SIGSEGV in jvm GCTaskThread

My application is experiencing crashes in production.
The crash dump indicates that a SIGSEGV has occurred in GCTaskThread.
It uses JNI, so there might be some source of memory corruption, although I can't be sure.
How can I debug this problem? I thought of using -XX:OnError... but I am not sure what will help me debug this.
Also, can some of you give a concrete example of how JNI code can crash the GC with SIGSEGV?
EDIT:
OS:SUSE Linux Enterprise Server 10 (x86_64)
vm_info: Java HotSpot(TM) 64-Bit Server VM (11.0-b15) for linux-amd64 JRE (1.6.0_10-b33), built on Sep 26 2008 01:10:29 by "java_re" with gcc 3.2.2 (SuSE Linux)
EDIT:
The issue stopped occurring after we disabled hyper-threading. Any thoughts?
Errors in JNI code can occur in several ways:
The program crashes during execution of a native method (most common).
The program crashes some time after returning from the native method, often during GC (not so common).
Bad JNI code causes deadlocks shortly after returning from a native method (occasional).
If you think that you have a problem with the interaction between user-written native code and the JVM (that is, a JNI problem), you can run diagnostics that help you check the JNI transitions. To invoke these diagnostics, specify the -Xcheck:jni option when you start up the JVM.
The -Xcheck:jni option activates a set of wrapper functions around the JNI functions. The wrapper functions perform checks on the incoming parameters; a sketch of the kind of mistake these checks are meant to catch follows the list below. These checks include:
Whether the call and the call that initialized JNI are on the same thread.
Whether the object parameters are valid objects.
Whether local or global references refer to valid objects.
Whether the type of a field matches the Get<Type>Field or Set<Type>Field call.
Whether static and nonstatic field IDs are valid.
Whether strings are valid and non-null.
Whether array elements are non-null.
The types of array elements.
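As a concrete (and hypothetical: the class and method names are invented, not taken from the asker's code) example of JNI code that can crash the GC later, consider caching a local reference past the end of the native call. This is the kind of mistake the reference-validity checks above are meant to flag when -Xcheck:jni is enabled.

```cpp
// Hypothetical native method showing a common way JNI code corrupts the GC's
// view of the heap: caching a local reference beyond the current native call.
#include <jni.h>

static jobject cached_obj;   // naive cache of a JNI reference

extern "C" JNIEXPORT void JNICALL
Java_com_example_Native_badCache(JNIEnv* env, jobject self, jobject obj) {
    (void)self;

    // BUG: 'obj' is a *local* reference, valid only for the duration of this
    // call. Storing it and touching it later leaves a dangling handle that
    // the GC may move or reclaim -- a crash then shows up "some time after
    // returning from the native method, often during GC".
    cached_obj = obj;

    // Correct approach: promote to a global reference, and release it with
    // env->DeleteGlobalRef(cached_obj) once it is no longer needed.
    // cached_obj = env->NewGlobalRef(obj);
}
```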
Please read the following links:
http://publib.boulder.ibm.com/infocenter/javasdk/v5r0/index.jsp?topic=/com.ibm.java.doc.diagnostics.50/html/jni_debug.html
http://www.oracle.com/technetwork/java/javase/clopts-139448.html#gbmtq
Use valgrind. This sounds like memory corruption. The output will be verbose, but try to isolate the report to the JNI library if it's possible.
Since the faulty thread seems to be GCTaskThread, did you try enabling -verbose:gc and analyzing the output (preferably using a graphical tool like samurai, etc.)? Are you able to isolate a specific lib after examining the hs_err file?
Also, can you please provide more information on what causes the issue and if it is easily reproducible?