Forcibly Terminating DirectShow - webcam

I have an application that captures video and images from webcams. Normally it works well and reliably using the DirectShow.NET wrapper. Stopping the graph, however, often leads to deadlocks. It uses a number of filters, including the SampleGrabber filter and vendor-supplied filters (which we cannot edit or replace). The normal IMediaControl methods to stop the graph do not work. Because we cannot edit the vendor filters, we cannot remove the deadlock and free up the cameras. Terminating the application fixes the problem and frees up the cameras. Is there any way to terminate the DirectShow thread(s) without terminating the parent application?
My application is in C#, but if you have a C++ answer I will accept it and port it.

If it is frozen, the only way to break the deadlock (especially to release a lock on an exclusive resource such as the camera) is to kill the entire process.
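One pragmatic way to reconcile that with keeping the parent application alive is to host the capture graph in a separate helper process and kill only that process when a graceful stop hangs. A minimal Win32 C++ sketch, where "CaptureHelper.exe", the IPC step and the 5-second timeout are all assumptions for illustration:

#include <windows.h>

int main()
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"CaptureHelper.exe";   // hypothetical process hosting the DirectShow graph
    if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE, 0,
                        nullptr, nullptr, &si, &pi))
        return 1;

    // ... ask the helper to stop its graph via whatever IPC you prefer ...

    // If the helper deadlocks inside IMediaControl::Stop, kill it; the OS then
    // closes its handles and the camera becomes usable again.
    if (WaitForSingleObject(pi.hProcess, 5000) == WAIT_TIMEOUT)
        TerminateProcess(pi.hProcess, 1);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}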


Programmable "real-time" MIDI processing

In my band, all musicians have both hands busy at any time. However, we want to add whole synthesizer chords (quarter-note to whole-note length), triggered each time by a simple foot switch (because playing along with a sequencer is currently too difficult for us).
Some time ago I wrote a (Windows) console application in C (MinGW) that converted incoming MIDI events to text, piped that text to an external program (AWK script), and re-converted that external program's text output back to MIDI events.
Basically every sort of filtering or event generation was possible; I actually produced chords triggered by simple control messages; I kept note-ONs in memory so I could turn them OFF whenever a new chord was sent, etc. - the actual processing (execution) times were not a problem at all(!)
But I had to accept that not only latency, but also the notoriously unpredictable scheduling of user applications by the OS (when, and for how long, a process gets to run) made this concept practically worthless, at least for "real-time" use. There were always clearly perceptible delays of unpredictable duration.
I read about user-mode driver programming and downloaded some resources, but somehow stopped working on that project without a real result.
Apart from that specific project, I also have some experience writing small "virtual" machines that can express exactly the variables, conditionals and maths needed, stored as a token tree and processed quite fast. Maybe there is also the option to embed Lua, V8, or something like that. So calling another (external) program is not necessarily the issue here, since that can be avoided.
The problem that remains is that the processing as a whole is still done by a (user) application. So I figure there is no way around a (user mode) driver, in this scenario.
Alternatively, I was even considering (more "real-time") hardware - a Raspi or the like - but then the MIDI interface may be an additional challenge.
Is there any hardware or software solution (or project) available that could serve as a base for such a _generic MIDI filter/processor_? Apart from predictable timing behaviour, it is desirable not to need a (C) compilation environment when building filters/rules, since that "creative" step will probably happen in our rehearsal room (laptop available), which is certainly not a "programming lab". Text-based "programs" are fine - in the long term I'll maybe build a GUI for wiring/generating rules anyway.
MIDI is handled pretty well in Windows. I'm not sure what the source of your original problems was. No doubt there is some latency, though.
You can handle this in real time with a microcontroller. The good news is that you don't even have to build the hardware. Off-the-shelf controllers are available for this. For example: http://www.midisolutions.com/prodevp.htm
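If you want to stay in software on Windows, keeping everything inside one process removes the pipe-to-AWK round trip entirely. Below is a minimal sketch using the WinMM API; the device IDs (0), the control number (64) and the C major chord are placeholders for illustration, not a finished solution:

// Turn an incoming control-change into a chord, entirely in-process.
#include <windows.h>
#include <mmsystem.h>

static HMIDIOUT g_out;

static void SendNoteOn(BYTE note, BYTE velocity)
{
    // 0x90 = note-on on channel 1; status/data bytes are packed low-to-high.
    midiOutShortMsg(g_out, 0x90 | (note << 8) | (velocity << 16));
}

static void CALLBACK MidiInProc(HMIDIIN, UINT msg, DWORD_PTR, DWORD_PTR p1, DWORD_PTR)
{
    if (msg != MIM_DATA)
        return;
    BYTE status = (BYTE)(p1 & 0xFF);
    BYTE data1  = (BYTE)((p1 >> 8) & 0xFF);
    if (status == 0xB0 && data1 == 64)   // control-change #64 (e.g. a foot switch)
    {
        SendNoteOn(60, 100);             // C major triad as an example chord
        SendNoteOn(64, 100);
        SendNoteOn(67, 100);
    }
}

int main()
{
    HMIDIIN in;
    midiOutOpen(&g_out, 0, 0, 0, CALLBACK_NULL);                      // first MIDI output device
    midiInOpen(&in, 0, (DWORD_PTR)MidiInProc, 0, CALLBACK_FUNCTION);  // first MIDI input device
    midiInStart(in);
    Sleep(INFINITE);                     // keep processing callbacks until killed
}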

Should I expect "device lost" conditions as normal under Vulkan?

I'm asking because I wonder how robust I should make my programs against device losses.
Should I only expect devices to be lost in the case of, say, hardware errors, driver bugs, improper API usage or non-terminating shader programs; or should I also expect device loss in such cases as, say, suspending and resuming my laptop, minimizing the application window, or just randomly because the implementation felt like it?
It's unfortunately going to vary by GPU, driver, and OS, which leads to the somewhat vague spec wording that krOoze quoted:
A logical device may become lost because of hardware errors, execution timeouts, power management events and/or platform-specific events.
For reference, there is nothing in the Android OS itself that would require a device lost -- e.g. it doesn't force a device-lost when an app goes into the background or the screen is turned off.
But it's likely that some driver/hardware combinations will report a device-lost error after a GPU exception (or reset) unless the driver can guarantee that nothing from your VkDevice could have been affected. That's a surprisingly difficult guarantee to make: even if your queues weren't running when the problem occurred, some of your data might still have been sitting in dirty cache lines, and if the reset invalidates those lines instead of writing them back to memory, your data is corrupted. An exception/reset can be caused by hardware or driver bugs, or by any app on the system hitting a watchdog timeout (an infinite loop in a shader is the easy example, but even a shader that is making progress can simply take too long).
In practice, these should be fairly rare events, and I believe (without data) that these days it's primarily caused by hotplug (rare) or misbehaving hardware/driver/app events rather than more routine things like device sleep.
Since testing your recovery code is going to be difficult and it'll therefore likely be buggy, my recommendation would be to just do something heavy-handed but simple, like saving application state and either restarting your app automatically, or quitting and asking the user to restart. Depending on what you're building, it might be reasonable to do something more sophisticated like tearing down and restarting+restoring your renderer system without taking down the rest of your app.
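For what it's worth, the heavy-handed recovery looks roughly like the sketch below. VK_ERROR_DEVICE_LOST typically surfaces from calls such as vkQueueSubmit, vkQueuePresentKHR or vkWaitForFences; the queue, submitInfo, fence and device handles are assumed to already exist, and the Save/Destroy/Rebuild helpers are placeholders for application code:

VkResult r = vkQueueSubmit(queue, 1, &submitInfo, fence);
if (r == VK_ERROR_DEVICE_LOST)
{
    SaveApplicationState();            // hypothetical: persist whatever the user would lose
    DestroyAllDeviceChildObjects();    // hypothetical: pipelines, swapchain, memory, ...
    vkDestroyDevice(device, nullptr);  // the VkInstance stays valid and can be reused
    RebuildRendererFromSavedState();   // hypothetical: create a fresh VkDevice and restart
}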

System call without context switching?

I was just reading up on how Linux works in my OS book when I came across this:
[...] the kernel is created as a single, monolithic binary. The main reason is to improve performance. Because all kernel code and data structures are kept in a single address space, no context switches are necessary when a process calls an operating-system function or when a hardware interrupt is delivered.
That sounded quite amazing to me; surely it must store the process's context before running off into kernel mode to handle an interrupt... But OK, I'll buy it for now. A few pages on, while describing a process's scheduling context, it said:
Both system calls and interrupts that occur while the process is executing will use this stack.
"this stack" being the place where the kernel stores the process's registers and such.
Isn't this a direct contradiction of the first quote? Am I misinterpreting it somehow?
I think the first quote is referring to the differences between a monolithic kernel and a microkernel.
Linux being monolithic, all its kernel components (device drivers, scheduler, VM manager) run at ring 0. Therefore, no context switch is necessary when performing system calls and handling interrupts.
Contrast microkernels, where components like device drivers and IPC providers run in user space, outside of ring 0. Therefore, this architecture requires additional context switches when performing system calls (because the performing module might reside in user space) and handling interrupts (to relay the interrupts to the device drivers).
"Context switch" could mean one of a couple of things, both relevant: (1) switching from user to kernel mode to process the system call, or an involuntary switch to kernel mode to process an interrupt against the interrupt stack, or (2) switching to run another user process in user space, with a jump to kernel space in between the two.
Any movement from user space to kernel space implies saving enough user-space state to return to it reliably. If the kernel-space code then decides that - since you're no longer running that process's user code anyway - it's time to let another user process run, a full process switch happens at that point.
So at the least, you're talking about 2-3 stacks or places to store a "context": hardware interrupts need a kernel-level stack to record what to return to; user method/subroutine calls use the ordinary user stack; and so on.
The original Unix kernels - and the model isn't that different now for this part - ran system calls like a short-order cook processing breakfast orders: move this over on the stove to make room for the order of bacon that just arrived, start the bacon, go back to the first order. All inside the kernel, switching contexts as needed. It was not a huge monitoring application, which probably drove the IBM and DEC software folks mad.
When making a system call in Linux, a switch is made from user space to kernel space (ring 3 to ring 0). Each process has an associated kernel-mode stack that is used while the system call executes. Before the system call runs, the process's CPU registers are saved so that user-mode execution can resume later; the user-mode stack is separate from the kernel-mode stack and is the one the process uses for user-space execution.
When a process is in kernel mode (or user mode), calling functions of the same mode does not require a context switch. This is what the first quote refers to.
The second quote refers to the kernel mode stack, and not the user-mode stack.
Having said this, I should mention Linux optimisations where no transition into kernel space is needed to execute a system call, i.e. all processing related to the call is done in user space itself (thus no context switch). vsyscall and vDSO are such techniques. The idea behind them is simple: the kernel exposes to user space the data required to satisfy the corresponding system call. More info can be found in this LWN article.
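As a quick illustration of that fast path (Linux with glibc assumed), the first call below is normally served entirely from the vDSO in user space, while the second forces a genuine trap into the kernel:

#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main()
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);               // usually answered by the vDSO, no mode switch
    syscall(SYS_clock_gettime, CLOCK_MONOTONIC, &ts);  // always a real system call
    return 0;
}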
In addition to this, there have been some research projects in which all execution happens in the same ring: user-space programs and the OS code both reside in the same ring, the idea being to get rid of the overhead of ring switches. Microsoft's Singularity OS is one such project.

Ordered call stacks - with Instruments Time Profiler?

I am trying to fix a randomly occurring crash in an iOS app (EXC_BAD_ACCESS (SIGSEGV) KERN_INVALID_ADDRESS at 0xe104019e). The application performs many operations at the same time - loading data with NSURLConnection, redrawing a complex layout with many nested UIViews, drawing UIImages, caching downloaded files on the local disk, checking file availability in the local cache, etc. These operations are distributed across a number of threads, and when the crash takes place I am totally confused about what exact steps preceded it.
I would like to use a method to track the call stacks in order of occurrence, separated by thread, and see which objects' methods were called during the last 1-5 ms before the crash; then I could isolate the bug that produces the crash.
At first sight the Time Profiler in Instruments offers this kind of tracking ability with many details, but the sampled call stacks seem to be presented in a random order - or maybe I'm getting it wrong...
Is there a method that would tell me exactly what happens, in order of execution, with Instruments?
1) Run with zombies enabled, and remove all compiler and static analyzer warnings.
2) It is likely a threading issue, where you are not protecting your shared data appropriately.
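As an illustration of point 2 (plain C++ shown for brevity; the same idea applies with NSLock, @synchronized or a serial dispatch queue in the app itself), every thread that touches shared state such as the cache bookkeeping should go through one lock; the names below are hypothetical:

#include <mutex>
#include <map>
#include <string>

std::mutex cacheMutex;                      // guards cachedFiles on every thread
std::map<std::string, bool> cachedFiles;    // hypothetical shared state

bool IsCached(const std::string& path)
{
    std::lock_guard<std::mutex> lock(cacheMutex);
    auto it = cachedFiles.find(path);
    return it != cachedFiles.end() && it->second;
}

void MarkCached(const std::string& path)
{
    std::lock_guard<std::mutex> lock(cacheMutex);
    cachedFiles[path] = true;   // concurrent unsynchronized writes here are a classic source of crashes like this
}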
Instruments is quite powerful regarding thread sorting/filtering, but there is no exact template for what you are trying to accomplish.
If you can reduce the problem to a small collection of object types, then you can follow execution based on the reference counts if you add the Allocations instrument and enable reference-count recording.

Power Management in Symbian

Are there any "best practices" for writing a power-efficient background application in Symbian?
Specifically, is there any way (i.e. an API) for a Symbian app to give the OS a hint about its current state in order to reduce battery consumption?
In Android, for instance, there is the notion of Wake Locks, which prevents the device from going into standby mode - Is there anything similar in Symbian?
EDIT:
Are there any implications when running code as a separate thread with the Open-C library, and not as "native" Symbian C++, using Active Objects etc.? (the Open-C code is blocking on IO most of the time).
You can check user (in)activity with the RTimer::Inactivity() method. This approach is described on the Forum Nokia Wiki, which also describes how to reset the inactivity timer.
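A rough sketch of how that request looks from inside an active object (iTimer is assumed to be an already-created RTimer member):

// Ask to be completed once the user has been inactive for 30 seconds.
iTimer.Inactivity(iStatus, TTimeIntervalSeconds(30));
SetActive();

// User::ResetInactivityTime() resets the system inactivity counter, and
// User::InactivityTime() returns how long the user has been inactive so far.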
You can check whether the device screen is turned on or off using the HAL API. See the HAL and HALData classes. You can use a call like this:
TInt displayState = 0;
User::LeaveIfError(HAL::Get(HALData::EDisplayState, displayState));
displayState will then hold 0 if the display is turned off, or 1 otherwise.
With these APIs you will know whether the user is currently active, so you can change the behaviour of your background service to reduce its power consumption.
You can also use the Nokia Energy Profiler application to record the handset's power consumption under the different power-saving options of your background service. Also refer to Nokia's document describing best practices for saving device power. The document is quite straightforward, but useful nonetheless.
Hope this helps.
EDIT: About the separate thread and Open C. As far as I know, Open C is just a plug-in, and deep down all the implementations are still "native Symbian". So, as long as you avoid periodically polling some resource and just use ordinary blocking IO, your code is about as economical on power as the standard Symbian Active Object techniques (which use Symbian-specific semaphores to block threads).
I have not come across anything special in Symbian to keep the device out of stand-by mode. Basically the "best practices" are the same as for all mobile devices:
Don't loop waiting for things; always use whatever signalling services are available on the platform - for Symbian, Active Objects / User::WaitForXxx (see the sketch after this list).
Limit the number of background threads (currently all mobile devices still have only one CPU...).
Don't hang onto system services; close them as soon as possible. This is normally the main battery drain in my mobile applications, and finding which system service causes the most drain can be a real pain - WinMo is very bad for this.
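A bare-bones active object, for reference, since it is the signalling mechanism mentioned in the first item above; the service being waited on is left abstract:

// Skeleton active object: blocks on an asynchronous service instead of polling.
class CMyWatcher : public CActive
{
public:
    CMyWatcher() : CActive(EPriorityStandard)
    {
        CActiveScheduler::Add(this);
    }
    ~CMyWatcher()
    {
        Cancel();
    }
    void Start()
    {
        // Issue an asynchronous request here, e.g. iTimer.Inactivity(iStatus, ...).
        SetActive();          // the scheduler now waits; no CPU is burned
    }
private:
    void RunL()               // called when the outstanding request completes
    {
        // React to the event, then re-issue the request if needed.
    }
    void DoCancel()
    {
        // Cancel the outstanding request, e.g. iTimer.Cancel().
    }
};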
For me, it mostly comes down to a trade-off between battery life and the application's performance/responsiveness. Unfortunately the powers that be always seem to side with performance/responsiveness and damn the battery drain...
Give your application low priority (see the RProcess and RThread classes). Your approach will really depend on what your background application does. These things consume the most battery: the radios (GSM/3G/Wi-Fi/Bluetooth), the screen backlight, and file accesses.
Symbian OS will always try to put your application to sleep; you don't need to tell it to do this. Just make sure your approach gives it the opportunity to do so.
Power management is a very important issue when developing an application.
In Symbian it depends on what you use to run background activities: whether you use a separate thread or an Active Object.
For example, if you are developing a browser and want it to download something, that download should run in the background while the foreground activity carries on and shows progress, and when it finishes it should come back to the foreground.
If you use threads, it comes down to how you manage them: you can decide which thread to pause when a long-running activity starts, and when to resume it once the background activity has finished executing.
This is a very good topic you have come across.
There used to be an inactivity timer which could be reset by the application. This would prevent the screen from going into any screen saver mode.
If you use the various asynchronous functions in Symbian, your app will run when appropriate.
One of these methods should work depending on your needs. If you describe what you want to achieve in more detail it would be easier to help you.