JVM Terminates when Receiving POSIX Signal

I have a native C library that I load from Java using the JNA library. That native C library uses some POSIX signal handlers under the hood that respond to SIGRTMIN+3 and possibly other SIGRTMIN+x signals.
When I run the Java program from the terminal on Linux (64-bit Ubuntu), the program terminates and the shell prints the text "Real-time Signal 3". It seems that the library sends SIGRTMIN+3 to its own process, but the signal terminates the JVM instead.
I can see how this could happen, as I believe the default disposition for a process that receives SIGRTMIN+x without an associated handler is termination. In this case, however, the library WANTS to handle the signal, yet the JVM terminates upon its reception.
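For context, a handler for a real-time signal is normally installed with sigaction. Below is a minimal sketch of the kind of handler the native library presumably installs; the handler body is invented here purely for illustration:

// Minimal sketch: install a handler for SIGRTMIN+3 via sigaction.
// The handler body is hypothetical; the real library's handler is unknown.
#include <signal.h>
#include <unistd.h>

static void on_rt3(int) {
    // Only async-signal-safe calls are allowed inside a handler.
    const char msg[] = "handled SIGRTMIN+3\n";
    (void)write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main() {
    struct sigaction sa = {};
    sa.sa_handler = on_rt3;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGRTMIN + 3, &sa, nullptr);
    pause();  // block until a signal arrives
    return 0;
}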
1) I have written a quick and dirty application that sends SIGRTMIN+3 (and other SIGRTMIN+x signals) to the process after launching the application from the shell, and have confirmed that it DOES TERMINATE the Java application (a sketch of such a sender appears after this question).
2) I have read a lot of documentation on how Java provides "signal chaining" and have tried to launch the Java application after setting export LD_PRELOAD=".../libjsig.so", but have not been able to keep the application from terminating.
3) From what I have read, the JVM does not appear to use SIGRTMIN+3 for its internal purposes, but there is little documentation on how to tell the JVM to chain a SIGRTMIN+x signal rather than simply terminate.
Is there a way to instruct the JVM not to terminate and to allow the native library to handle the signal?
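For reference, the "quick and dirty" sender from point 1 above boils down to a single kill() call. A minimal sketch (the target PID is taken from the command line; the +3 offset matches the question):

// Sends SIGRTMIN+3 to the PID given as the first argument.
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char** argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    // SIGRTMIN may expand to a library call at run time, so the actual
    // signal number is not a compile-time constant.
    pid_t target = (pid_t)atoi(argv[1]);
    if (kill(target, SIGRTMIN + 3) != 0) {
        perror("kill");
        return 1;
    }
    return 0;
}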

Related

App crashes when switching applications

When you switch applications or press the Windows button, the application terminates. This does not happen on the first page of the application, though. It also happens in debug mode. The application consumes 28 megabytes of memory, so the system should not be terminating it.
Using Prism for Windows Phone and Unity 3.5 prerelease.
The crash occurs because GetNavigationState doesn't support serialization of the parameter type that was passed to Frame.Navigate.

Disable logging in vlc

I am writing a program to view an MJPEG stream with vlc. When running vlc directly from the command line, I get the error message [mjpeg # 0x10203ea00] No JPEG data found in image over and over again (with different PIDs). I would like to get rid of this, as I think all of that text output is bogging down my program (and it makes my own text output impossible to see, since it scrolls away about 0.5 seconds after it is written to the console).
I am connecting to Blue Iris, and am implementing my program with vlcj. The stream URL is http://10.10.80.39:8080/mjpg/cam1/video.mjpeg
I have tried all of the quiet options that I can find and set the verbosity to 0; I am at a loss on how to suppress this error.
I am running vlc 2.1. The error happens on multiple computers and multiple OSes.
You simply can't disable everything that vlc, or the libraries that vlc depends on, may emit. Not all of the log/error messages you see can be controlled by setting vlc's log level.
For me the problem is mainly libdvdnav spewing irrelevant messages to stderr.
You say you're using vlcj; I too wanted a way to easily ignore those error messages from inside my Java applications. With the latest vlcj-git (at the time of writing), there is an experimental NativeStreams [1] class that might help you.
This class uses JNA to wrap the "C" standard library and programmatically redirect either or both of the native process stdout and stderr streams.
You cannot simply redirect System.out and System.err, as some might expect, because these messages come from native code outside of the JVM, which of course does not use System.out or System.err.
You can redirect to log files (which could just keep growing), or to "/dev/null".
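At the C level, the redirection NativeStreams performs amounts to reopening the process-wide stderr stream, most likely via something like freopen. A minimal sketch of the same idea, independent of vlcj (the /dev/null target is taken from the paragraph above):

#include <stdio.h>

int main() {
    // Reassign the native stderr stream to /dev/null; anything the
    // process, including native libraries, writes to stderr is discarded.
    if (freopen("/dev/null", "w", stderr) == NULL)
        return 1;
    fprintf(stderr, "this message disappears\n");
    printf("stdout is untouched\n");
    return 0;
}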
The downside is that if you redirect the native streams, you also inevitably redirect the corresponding Java stream, so you would lose your own application output. In my own applications that's not a problem, because I log to stdout (which I don't redirect), whereas the vlc messages I don't want fortuitously go to stderr (which I redirect).
You could also just redirect your Java process output streams in the usual way when you launch the JVM. I wanted to be able to do this programmatically rather than having to write a shell script.
So it's not an ideal solution, but it works for me (only tested on Linux).
[1] https://github.com/caprica/vlcj/blob/a95682d5cd0fd8ac1d4d9b7a768f4b5600c87f62/src/main/java/uk/co/caprica/vlcj/runtime/streams/NativeStreams.java

CoCreateInstance takes a lot of time

After registering (RegAsm) my C# COM-visible class, I see that CoCreateInstance(__uuidof(myclass)) takes a long time, but only the first time; subsequent attempts in the same client process are resolved instantly. Any idea why it is taking so long?
NGen is not an option for me.
My COM server is in C# and the client is in MFC/ATL:
CComPtr<namespace::Imyclass> obj;
HRESULT hrx = obj.CoCreateInstance(__uuidof(namespace::myclass));
The first call to CoCreateInstance has to load the .NET runtime into the process and initialize it. Then your DLL has to be loaded, verified, and compiled into machine code (although just-in-time compilation helps a lot to speed up startup). The .NET runtime also has to parse the metadata of your assembly and then dynamically generate and compile the "COM callable wrappers" (http://msdn.microsoft.com/en-us/library/f07c8z1c.aspx), which are the proxies that bridge between the unmanaged COM world and the managed .NET runtime. Any additional libraries your code uses also need to be loaded, verified, and possibly compiled into machine code (if not NGEN'd).
This is inherently an expensive process. The delays you mention are not unheard of.
I don't believe there is much you can do to speed things up. I suggest you think about whether you can take the hit early in your program's lifetime by creating an object soon after startup. It won't make the work faster, but it might improve the user experience dramatically. If your program just can't tolerate the delays, then you should not use .NET to write the COM object (more specifically, you should not use .NET in your process at all; this is not an issue with using COM, it's an issue with loading .NET).
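One way to take that hit early is to create and immediately release a throwaway instance on a background thread right after startup. A hedged sketch reusing the CComPtr style from the question; WarmUpComObject is a hypothetical helper, and MyNamespace::myclass stands in for the question's placeholder class:

#include <windows.h>
#include <atlbase.h>
#include <thread>

// Hypothetical warm-up: pay the .NET/COM startup cost in the background
// so that the first real CoCreateInstance call later is fast.
void WarmUpComObject() {
    std::thread([] {
        if (SUCCEEDED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED))) {
            {
                CComPtr<IUnknown> throwaway;
                // Failures are ignored; this is purely an optimization.
                throwaway.CoCreateInstance(__uuidof(MyNamespace::myclass));
            }  // release the instance before CoUninitialize
            CoUninitialize();
        }
    }).detach();
}

The expensive parts (loading the CLR, JIT compilation, generating the COM callable wrappers) happen once per process, so warming up on any thread benefits later calls on other threads.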
Incidentally, this is one of the reasons why writing shell extensions in .NET is... "highly discouraged". See this recent post on this subject, which touches on the startup performance of .NET as well: http://blogs.msdn.com/b/oldnewthing/archive/2013/02/22/10396079.aspx
(That's why I asked earlier what kind of client you were running. A client that already runs .NET managed code depends on the .NET runtime and would not be affected by these delays)
The first call to CoCreateInstance likely has to consult the registry and the file system, load the appropriate code, allow it to initialize, and finally invoke a factory to create the instance you've asked for (if it could be located).
The second call benefits hugely from these previous steps. If the first call was successful, then the code is already loaded and initialized, and all it has to do is invoke a factory a second time.
If this delay occurs only on first load (after computer start), then it is caused by loading all the libraries. The first delay after startup (or after a long time without any .NET usage) will always be slow. (See Micelli's answer.)
A delay can also occur on every load. Today I found out that the Internet connection can also cause the delay.
Measured values:
- No Ethernet and no WiFi connection: 94 ms (Win7) / 1.5 s (WinXP)
- Internet connection (with proxy and non-standard gateway*; not all ports allowed): 4-5 s (WinXP)
- Connected to Ethernet but not to the Internet: 10 s (Win7)
- Shortly after connecting to Ethernet (while Windows tests the Internet connection; blue circle on the network icon): 30 s (Win7)
- Internet connection (with proxy and non-standard gateway*; only a few ports allowed): 30 s (Win7)
*Non-standard gateway: the allowed TCP ports and connections differ for each computer (different for WinXP and Win7).
Tested on Windows 7 (x64) and WinXP. I tested this because it came in as a complaint from a customer, and I traced the delay to CoCreateInstance. The loaded library is in C# and is signed (a signed assembly, with an snk and with a standard certificate for signing executable files), so the signature verification presumably contacts a certificate revocation server on load, which would explain why the delay depends on the network configuration.
See also: http://social.msdn.microsoft.com/Forums/sqlserver/en-US/cda45e39-ed11-4a17-a922-e47aa2e7b325/ce-40-delay-when-cocreateinstance-on-pc-without-internet?forum=sqlce

How to monitor a process on OS X?

I am looking for a way to monitor the state of one of my applications on OS X. There are a number of components that I need to monitor, such as the status of various communication channels. If they go down, the monitoring process should be able to warn the user both on screen and via a push notification.
XPC services look promising, but if the app crashes, I presume this will take out the service as well, or am I mistaken?
My preferred solution is something which would also monitor for unexpected termination, and restart the app if it happens.
What is the best way to do this?
I think monitoring communication channels, etc. must be done by each specific component (process). And if an unexpected error occurs, that component should exit immediately to ensure proper cleanup.
For process monitoring, the Apple Technical Q&A document below will be really helpful:
Technical Note TN2050: Observing Process Lifetimes Without Polling
You could write an app which starts your main application as a child process, and waits for it to exit. It could check the exit code, and then react according to your needs.
This approach is explained here: https://stackoverflow.com/a/78095/785411
Using fork() to run your main application as a child of a monitoring process is explained here: https://stackoverflow.com/a/4327062/785411
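A minimal sketch of that parent/child approach; the application path and the restart policy are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    for (;;) {
        pid_t child = fork();
        if (child < 0) { perror("fork"); return 1; }
        if (child == 0) {
            // Child: replace this process image with the monitored app.
            execl("/path/to/MyApp", "MyApp", (char*)NULL);
            perror("execl");  // only reached if exec fails
            _exit(127);
        }
        int status = 0;
        if (waitpid(child, &status, 0) < 0) { perror("waitpid"); return 1; }
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            break;  // clean exit: stop monitoring
        // Crash or nonzero exit: warn the user here, then relaunch.
        fprintf(stderr, "child terminated unexpectedly; restarting\n");
        sleep(1);  // simple backoff before the restart
    }
    return 0;
}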
I think you could possibly make use of the built-in facilities launchd and CrashReporter to achieve your requirements.
launchd is the OS X system supervisor intended for launching and monitoring background processes, and would typically be used to run XPC services. launchd agents can react to various system events, and processes can be configured to restart in the event of a crash (specified via the KeepAlive/SuccessfulExit key in the property list).
launchd can be set to treat various system events as launch events, including changes to files and directories, scheduled times, or incoming network connections.
CrashReporter is the OS X system facility that catches and logs all process crashes. It logs through the AppleSystemLogger facility and can be accessed with the syslog tools as documented in the linked TechNote. On Mountain Lion, user process crash reports end up in ~/Library/DiagnosticReports/, with a crash log and plist file pair created per crash event.
I think you could use these features in a couple of ways to achieve your requirement: if launchd is responsible for running the XPC services, it can take responsibility for restarting them on crash events, and they can be dissociated from any app crashes.
You could also write a launchd agent that responds to crash events by monitoring the crash report directory (e.g. using the QueueDirectories property) for new logs, and re-launches your application or presents notifications.
If each component runs in its own thread, you could run a watchdog program that monitors whether the threads are still alive. A script that runs ps in a loop and parses the output could do it (a sketch follows below).
You can see the various options here. See for example -C to select by command name, and -m to show all threads.
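A sketch of such a watchdog using popen to run ps in a loop. The command name myapp is a placeholder, and note that ps option syntax differs between the BSD ps shipped with OS X and GNU/Linux procps, so the exact flags may need adjusting:

#include <stdio.h>
#include <unistd.h>

int main() {
    for (;;) {
        // -C selects by command name (procps/Linux syntax; the BSD ps
        // on OS X differs). Add -m to list individual threads, per the
        // answer above.
        FILE* ps = popen("ps -C myapp -o pid=", "r");
        if (ps == NULL) { perror("popen"); return 1; }
        char line[256];
        int rows = 0;
        while (fgets(line, sizeof line, ps) != NULL)
            rows++;  // one row per matching process
        pclose(ps);
        if (rows == 0)
            fprintf(stderr, "myapp is not running\n");
        sleep(5);  // poll interval
    }
}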

Application behaves differently when output is redirected to an NSPipe object?

I have an application which works with sockets and reads/writes data. It uses the Foundation framework combined with CFNetwork and stdio.
Here is the issue: when it is launched from the console (bash shell) it works 100% fine and there is nothing wrong with it. However, when it is invoked by another application via NSTask, madness begins. The whole application goes insane: it reads the socket only once and then hangs (it is meant to exit after it is done).
This application does not rely on environment variables or anything else like that. It is not a user issue either. When it is launched, it sends a simple request to the server, printf's the response, and reads again. This happens until a termination packet is received.
I am really confused, and it feels like there is something inside the framework which makes the app insane just to piss the programmer off.
By the way, I'm on Mac OS X Snow Leopard and the application is for the same platform.
EDIT 1: Redirecting stdout to an NSPipe causes it. But why?
libc treats a pipe or file differently from a console connected to a (pseudo)terminal. In particular, the default buffering policy is different. See the extensive discussion in this Stack Overflow Q&A.
So, it's perfectly conceivable that a program which works when connected to a (pseudo) terminal won't work with a pipe. If you need more specific advice, you need to post (at least the skeleton of) your code.
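To make the buffering issue concrete: when stdout is a terminal, libc line-buffers it, but when stdout is a pipe (such as an NSPipe), libc fully buffers it, so printf output sits in a buffer that the reading side never sees until the buffer fills or the program exits. Assuming that is the cause here, the usual fixes look like this:

#include <stdio.h>

int main() {
    // Option 1: disable buffering on stdout entirely; must be done
    // before any output is written.
    setvbuf(stdout, NULL, _IONBF, 0);

    printf("response line\n");

    // Option 2: keep the default buffering but flush explicitly after
    // each write, e.g. after printing every server response.
    fflush(stdout);
    return 0;
}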