Running MS Kinect Color, Skeleton and Depth streams in different threads

Can someone provide examples of how to run the MS Kinect Color, Skeleton and Depth streams in different threads? I have searched the internet but have not been able to find anything. Thanks in advance.

The KinectExplorer example in the Microsoft Kinect Developer Toolkit provides a KinectDepthViewer control that shows how to process the depth data in a different thread -- the DepthColorizer class. The concepts can be adapted to process skeleton data as well.
You don't explain why you want to run these on different threads, so it is unclear why you would need to do so. All of the data is already gathered off the UI thread, in its own process. What matters is when you want to work with it on the UI thread...
The color stream is just an RGB stream. There may be some processing you need to do on this image (e.g., skinning and face tracking), but generally it is not used as heavily as the others. Usually the only processing required is copying the bits from the stream into an Image for display, which has to be done on the UI thread anyway.
If you wish to color the depth stream for any reason, doing so on a non-UI thread is beneficial. If you're doing some other special processing on it, that could also be done on a non-UI thread. The example code above can easily be adapted; a sketch of the hand-off pattern follows.
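As an illustration of the pattern only (this is not Kinect SDK code; DepthFrame and ColorizeDepth are made-up stand-ins for the raw depth pixels and the toolkit's DepthColorizer), here is a minimal C++ producer/consumer sketch where the frame-ready handler just hands the data off and a worker thread does the heavy colorization:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using DepthFrame = std::vector<short>;          // hypothetical raw depth frame

// Hypothetical stand-in for the colorization work.
std::vector<unsigned char> ColorizeDepth(const DepthFrame& frame) {
    std::vector<unsigned char> out(frame.size());
    for (size_t i = 0; i < frame.size(); ++i)
        out[i] = static_cast<unsigned char>(frame[i] >> 5);   // placeholder mapping
    return out;
}

std::queue<DepthFrame> pending;
std::mutex m;
std::condition_variable cv;
bool done = false;   // set to true and notify when shutting down

// Called from the frame-ready event (sensor/UI side): copy and hand off only.
void OnDepthFrame(DepthFrame frame) {
    { std::lock_guard<std::mutex> lock(m); pending.push(std::move(frame)); }
    cv.notify_one();
}

// Worker thread: the expensive per-frame work happens here, off the UI thread.
void Worker() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return done || !pending.empty(); });
        if (pending.empty()) return;               // finished and drained
        DepthFrame frame = std::move(pending.front());
        pending.pop();
        lock.unlock();
        std::vector<unsigned char> colored = ColorizeDepth(frame);
        // marshal 'colored' back to the UI thread for display
        (void)colored;
    }
}

Only the final copy into the displayed Image then needs to happen on the UI thread.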
The skeleton stream requires the most CPU effort, but all of that work is already done for you away from the UI thread. By the time you get to touch it, the data is just a series of objects and arrays, so there is little left that would need a separate thread at that point.
If you explain what you are trying to accomplish, the need for separate processing threads may become clearer.

Related

Programmable "real-time" MIDI processing

In my band, all musicians have both hands busy at any time. However, we want to add whole synthesizer chords (1/4 to whole-note length), perhaps triggered by a simple foot switch each time (because playing along to a sequencer is currently too difficult for us).
Some time ago I wrote a (Windows) console application in C (MinGW) that converted incoming MIDI events to text, piped that text to an external program (an AWK script), and converted that program's text output back into MIDI events.
Basically every sort of filtering or event generation was possible; I actually produced chords triggered by simple control messages, kept note-ons in memory so I could turn them off whenever a new chord was sent, and so on. The actual processing (execution) times were not a problem at all(!)
But I came to realize that not just the latency, but the notoriously unpredictable OS multitasking/switching of user applications (with respect to "when" and "for how long" they run) made this concept practically worthless, at least for real-time use. There were always clearly perceptible delays of unpredictable duration.
I read about user-mode driver programming and downloaded some resources, but somehow stopped working on that project without a real result.
Apart from that specific project, I also have some experience writing small "virtual machines" that can express exactly the variables, conditionals and math I need, stored as a token tree and processed quite quickly. There is also the option of embedding Lua, V8 or something similar, so calling another (external) program is not necessarily the issue here, since that can be avoided.
The problem that remains is that the processing as a whole is still done by a (user-mode) application. So I figure there is no way around a (user-mode) driver in this scenario.
Alternatively, I was even considering (more "real-time") hardware - a Raspi or the like - but then the MIDI interface may be an additional challenge.
Is there any hardware or software solution (or project) available that could serve as a base for such a generic MIDI filter/processor? Apart from predictable timing behaviour, it is desirable not to need a (C) compilation environment for building filters/rules, since that "creative" step will probably happen in our rehearsal room (laptop available), which is certainly not a "programming lab". Text-based "programs" are fine - for the long term I may build a GUI for wiring/generating rules anyway.
MIDI is handled pretty well in Windows. I'm not sure what the source of your original problems was; no doubt there is some latency, though.
You can handle this in real time with a microcontroller. The good news is that you don't even have to build the hardware. Off-the-shelf controllers are available for this. For example: http://www.midisolutions.com/prodevp.htm

Impossible to acquire and present in parallel with rendering?

Note: I'm self-learning Vulkan with little knowledge of modern OpenGL.
Reading the Vulkan specifications, I can see very nice semaphores that allow the command buffer and the swapchain to synchronize. Here's what I understand to be a simple (yet I think inefficient) way of doing things:
Get image with vkAcquireNextImageKHR, signalling sem_post_acq
Build command buffer (or use pre-built) with:
Image barrier to transition image away from VK_IMAGE_LAYOUT_UNDEFINED
render
Image barrier to transition image to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
Submit to the queue, waiting on sem_post_acq at the fragment stage and signalling sem_pre_present.
vkQueuePresentKHR waiting on sem_pre_present.
The problem here is that the image barriers in the command buffer must know which image they are transitioning, which means that vkAcquireNextImageKHR must return before one knows how to build the command buffer (or which pre-built command buffer to submit). But vkAcquireNextImageKHR could potentially sleep a lot (because the presentation engine is busy and there are no free images). On the other hand, the submission of the command buffer is costly itself, and more importantly, all stages before fragment can run without having any knowledge of which image the final result will be rendered to.
Theoretically, it seems to me that a scheme like the following would allow a higher degree of parallelism:
Build command buffer (or use pre-built) with:
Image barrier to transition image away from VK_IMAGE_LAYOUT_UNDEFINED
render
Image barrier to transition image to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
Submit to the queue, waiting on sem_post_acq at the fragment stage and signalling sem_pre_present.
Get image with vkAcquireNextImageKHR, signalling sem_post_acq
vkQueuePresentKHR waiting on sem_pre_present.
Which would, again theoretically, allow the pipeline to execute all the way up to the fragment shader, while we wait for vkAcquireNextImageKHR. The only reason this doesn't work is that it is neither possible to tell the command buffer that this image will be determined later (with proper synchronization), nor is it possible to ask the presentation engine for a specific image.
My first question is: is my analysis correct? If so, is such an optimization not possible in Vulkan at all and why not?
My second question is: wouldn't it have made more sense if you could tell vkAcquireNextImageKHR which particular image you want to acquire, and iterate through them yourself? That way, you could know in advance which image you are going to ask for, and build and submit your command buffer accordingly.
As Nicol said, you can record secondary command buffers independently of which image they will render to.
However, you can take it a step further and record command buffers for all swapchain images in advance, then select the correct one to submit based on the image index you acquire.
This kind of reuse does take some extra consideration, because all the memory ranges used are baked into the command buffer. But in many situations the required render commands don't actually change from one frame to the next; only a little of the data they use does.
So the sequence of such a frame would be:
// Block until the presentation engine hands back a usable image index.
vkAcquireNextImageKHR(vk.dev, vk.swap, UINT64_MAX, vk.acquire, VK_NULL_HANDLE, &vk.image_ind);
// Wait for the previous submission that used this image (fences are created
// signaled so the first wait returns immediately), then reset the fence.
vkWaitForFences(vk.dev, 1, &vk.fences[vk.image_ind], VK_TRUE, UINT64_MAX);
vkResetFences(vk.dev, 1, &vk.fences[vk.image_ind]);
// Update per-frame data in the persistently mapped staging memory.
engine_update_render_data(vk.mapped_staging[vk.image_ind]);
// Submit the pre-recorded command buffer for this image: wait on vk.acquire,
// signal vk.present and the per-image fence.
VkSubmitInfo submit = build_submit(vk.acquire, vk.rend_cmd[vk.image_ind], vk.present);
vkQueueSubmit(vk.rend_queue, 1, &submit, vk.fences[vk.image_ind]);
VkPresentInfoKHR present = build_present(vk.present, vk.swap, vk.image_ind);
vkQueuePresentKHR(vk.queue, &present);
Granted, this does not allow for conditional rendering, but the GPU is generally fast enough to render some geometry that ends up off-screen without any noticeable delay. So until the player reaches a loading zone where new geometry has to be displayed, you can keep those command buffers alive.
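Note that build_submit and build_present above are the answerer's own helpers, not Vulkan API calls. As a rough sketch of what the submission boils down to (assuming the same vk struct, one wait and one signal semaphore; waiting at the colour-attachment-output stage rather than the fragment stage is the common choice, since that is where the presentable image is first written):

// Wait for the acquired image only at the colour-attachment stage, so earlier
// pipeline stages are free to run before the image is actually available.
VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

VkSubmitInfo submit{};
submit.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submit.waitSemaphoreCount   = 1;
submit.pWaitSemaphores      = &vk.acquire;                // signalled by vkAcquireNextImageKHR
submit.pWaitDstStageMask    = &wait_stage;
submit.commandBufferCount   = 1;
submit.pCommandBuffers      = &vk.rend_cmd[vk.image_ind]; // pre-recorded buffer for this image
submit.signalSemaphoreCount = 1;
submit.pSignalSemaphores    = &vk.present;                // waited on by vkQueuePresentKHR

vkQueueSubmit(vk.rend_queue, 1, &submit, vk.fences[vk.image_ind]);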
Your entire question is predicated on the assumption that you cannot do any command buffer building work without a specific swapchain image. That's not true at all.
First, you can always build secondary command buffers; providing a VkFramebuffer is merely a courtesy, not a requirement. And this is very important if you want to use Vulkan to improve CPU performance. After all, being able to build command buffers in parallel is one of the selling points of Vulkan. For you to only be creating one is something of a waste for a performance-conscious application.
In such a case, only the primary command buffer needs the actual image.
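For illustration, a minimal sketch of beginning such a secondary command buffer before the swapchain image is known (render_pass and secondary_cmd are assumed to have been created elsewhere):

// Inheritance info for a secondary buffer recorded inside a render pass.
// The framebuffer may legally be VK_NULL_HANDLE if it is not known yet.
VkCommandBufferInheritanceInfo inherit{};
inherit.sType       = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
inherit.renderPass  = render_pass;        // assumed created elsewhere
inherit.subpass     = 0;
inherit.framebuffer = VK_NULL_HANDLE;     // swapchain image not chosen yet

VkCommandBufferBeginInfo begin{};
begin.sType            = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
begin.flags            = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
begin.pInheritanceInfo = &inherit;

vkBeginCommandBuffer(secondary_cmd, &begin);
// ... record draw calls here, independent of the swapchain image ...
vkEndCommandBuffer(secondary_cmd);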
Second, who says that you will be doing the majority of your rendering to the presentable image? If you're doing deferred rendering, most of your stuff will be written to deferred buffers. Even post-processing effects like tone-mapping, SSAO, and so forth will probably be done to an intermediate buffer.
Worst-case scenario, you can always render to your own image, then build a command buffer whose only contents are an image copy from your image to the presentable one.
all stages before fragment can run without having any knowledge of which image the final result will be rendered to.
You assume that the hardware has a strict separation between vertex processing and rasterization. This is true only for tile-based hardware.
Direct renderers just execute the whole pipeline, top to bottom, for each rendering command. They don't store post-transformed vertex data in large buffers. It just flows down to the next step. So if the "fragment stage" has to wait on a semaphore, then you can assume that all other stages will be idle as well while waiting.
wouldn't it have made more sense if you could tell vkAcquireNextImageKHR which particular image you want to acquire, and iterate through them yourself?
No. The implementation would be unable to decide which image to give you next. This is precisely why you have to ask for an image: so that the implementation can figure out on its own which image it is safe for you to have.
Also, there is specific language in the specification saying that the semaphore and/or fence you provide must not only be unsignaled, but must also have no outstanding operations waiting on them. Why?
Because vkAcquireNextImageKHR can fail. If you have some operation in a queue that's waiting on a semaphore that's never going to fire, that will cause huge problems. You have to successfully acquire first, then submit work that is based on the semaphore.
Generally speaking, if you're regularly having trouble getting presentable images in a timely fashion, you need to make your swapchain longer. That's the point of having multiple buffers, after all.
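To make that concrete, here is a sketch of choosing the image count at swapchain-creation time (phys_dev and surface are assumed to exist; the other required create-info fields are omitted):

// Ask for one image more than the surface's minimum so the presentation engine
// is less likely to make vkAcquireNextImageKHR block.
VkSurfaceCapabilitiesKHR caps;
vkGetPhysicalDeviceSurfaceCapabilitiesKHR(phys_dev, surface, &caps);

uint32_t image_count = caps.minImageCount + 1;
if (caps.maxImageCount > 0 && image_count > caps.maxImageCount)
    image_count = caps.maxImageCount;       // maxImageCount == 0 means "no upper limit"

VkSwapchainCreateInfoKHR info{};
info.sType         = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
info.surface       = surface;
info.minImageCount = image_count;
// ... format, extent, usage, present mode, etc. as usual ...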

Grand Central Dispatch vs NSThreads?

I searched a variety of sources but don't really understand the difference between using NSThreads and GCD. I'm completely new to the OS X platform so I might be completely misinterpreting this.
From what I read online, GCD seems to do the exact same thing as basic threads (POSIX, NSThreads etc.) while adding much more technical jargon ("blocks"). It seems to just overcomplicate the basic thread creation system (create thread, run function).
What exactly is GCD and why would it ever be preferred over traditional threading? When should traditional threads be used rather than GCD? And finally is there a reason for GCD's strange syntax? ("blocks" instead of simply calling functions).
I am on Mac OS X 10.6.8 Snow Leopard and I am not programming for iOS - I am programming for Macs. I am using Xcode 3.6.8 in Cocoa, creating a GUI application.
Advantages of Dispatch
The advantages of dispatch are mostly outlined here:
Migrating Away from Threads
The idea is that you eliminate work on your part, since the paradigm fits MOST code more easily.
It reduces the memory penalty your application pays for storing thread stacks in the application’s memory space.
It eliminates the code needed to create and configure your threads.
It eliminates the code needed to manage and schedule work on threads.
It simplifies the code you have to write.
Empirically, using GCD-style locking instead of @synchronized is about 80% faster or more, though micro-benchmarks may be deceiving. Read more here, though I think the advice to go async with writes does not apply in many cases, and it's slower (but asynchronous).
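As a rough illustration of that kind of GCD-style locking (a sketch with made-up names; a private serial queue plays the role of the lock):

// NULL attribute = a serial queue: only one submitted block runs at a time.
dispatch_queue_t lockQueue = dispatch_queue_create("com.example.lock", NULL);

__block double balance = 0;

// Instead of @synchronized(self) { balance += 10.0; }
dispatch_sync(lockQueue, ^{
    balance += 10.0;        // serialized with every other block on lockQueue
});

// The "go async with writes" variant: returns immediately; the write still
// happens, in order, on the serial queue.
dispatch_async(lockQueue, ^{
    balance += 10.0;
});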
Advantages of Threads
Why would you continue to use Threads? From the same document:
It is important to remember that queues are not a panacea for
replacing threads. The asynchronous programming model offered by
queues is appropriate in situations where latency is not an issue.
Even though queues offer ways to configure the execution priority of
tasks in the queue, higher execution priorities do not guarantee the
execution of tasks at specific times. Therefore, threads are still a
more appropriate choice in cases where you need minimal latency, such
as in audio and video playback.
Another place where I haven't personally found an ideal solution using queues is daemon processes that need to be constantly rescheduled. Not that you cannot reschedule them, but looping within an NSThread method is simpler (I think). Edit: now I'm convinced that even in this context GCD-style locking would be faster, and you could also loop within a GCD-dispatched operation.
Blocks in Objective-C?
Blocks are really horrible in Objective-C due to the awful syntax (though Xcode can sometimes help with autocompletion, at least). If you look at blocks in Ruby (or any other language, pretty much) you'll see how simple and elegant they are for dispatching operations. I'd say that you'll get used to the Objective-C syntax, but I really think that you'll get used to copying from your examples a lot :)
You might find my examples from here to be helpful, or just distracting. Not sure.
While the answers so far are about threads vs. GCD within a single application and the differences that makes for programming, the reason you should always prefer GCD is multitasking environments (since you are on Mac OS X and not iOS). Threads are fine if your application is running alone on your machine. Say you have a video editing program and want to apply some effect to the video; the render is going to take 10 minutes on a machine with eight cores. Fine.
Now, while the video app is churning away in the background, you open an image editing program and play with some high-resolution image, decide to apply a special image filter, and your image application, being clever, detects that you have eight cores and starts eight threads to process the image. Nice, isn't it? Except that it's terrible for performance. The image editing app doesn't know anything about the video app (and vice versa), so both request their respectively optimal number of threads. And there will be pain and blood while the cores try to switch from one thread to another, because to avoid starvation the CPU will eventually let all the threads run, even though in this situation it would be more optimal to run only four threads for the video app and four for the image app.
For a more detailed reference, take a look at http://deusty.blogspot.com/2010/11/introducing-gcd-based-cocoahttpserver.html where you can see a benchmark of an HTTP server using GCD versus threads, and how it scales. Once you understand the problem threads have on multicore machines in multi-app environments, you will always want to use GCD, simply because threads are not always optimal, while GCD potentially can be, since the OS can scale each app's thread usage depending on load.
Please remember that we won't get more GHz in our machines any time soon. From now on we will only get more cores, so it's your duty to use the best tool for this environment, and that is GCD.
Blocks allow for passing a block of code to execute. Once you get past the "strange syntax", they are quite powerful.
GCD also uses queues which, if used properly, can help with lock-free concurrency when the code executing on the separate queues is isolated. It's a simpler way to do background work and concurrency while minimizing the chance of deadlocks (if used right).
The "strange syntax" comes from the choice of the caret (^): it was one of the few symbols that wasn't already overloaded as an operator in C++. For example:
See:
https://developer.apple.com/library/ios/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
When it comes to adding concurrency to an application, dispatch queues
provide several advantages over threads. The most direct advantage is
the simplicity of the work-queue programming model. With threads, you
have to write code both for the work you want to perform and for the
creation and management of the threads themselves. Dispatch queues let
you focus on the work you actually want to perform without having to
worry about the thread creation and management. Instead, the system
handles all of the thread creation and management for you. The
advantage is that the system is able to manage threads much more
efficiently than any single application ever could. The system can
scale the number of threads dynamically based on the available
resources and current system conditions. In addition, the system is
usually able to start running your task more quickly than you could if
you created the thread yourself.
Although you might think rewriting your code for dispatch queues would
be difficult, it is often easier to write code for dispatch queues
than it is to write code for threads. The key to writing your code is
to design tasks that are self-contained and able to run
asynchronously. (This is actually true for both threads and dispatch
queues.)
...
Although you would be right to point out that two tasks running in a
serial queue do not run concurrently, you have to remember that if two
threads take a lock at the same time, any concurrency offered by the
threads is lost or significantly reduced. More importantly, the
threaded model requires the creation of two threads, which take up
both kernel and user-space memory. Dispatch queues do not pay the same
memory penalty for their threads, and the threads they do use are kept
busy and not blocked.
GCD (Grand Central Dispatch): GCD provides and manages FIFO queues to which your application can submit tasks in the form of block objects. Work submitted to dispatch queues is executed on a pool of threads fully managed by the system, and no guarantee is made as to which thread a task executes on. Why GCD over threads? The system knows:
How much work your CPU cores are doing
How many CPU cores you have
How many threads should be spawned
If needed, GCD can go down into the kernel and communicate about resources, which means better scheduling.
There is less load on the kernel and better synchronization with the OS.
GCD reuses existing threads from its thread pool instead of creating and destroying them.
It makes the best use of the system's hardware resources, while allowing the operating system to balance the load of all the programs currently running, along with considerations like heat and battery life.
I have shared my experience with threads, the operating system and GCD at http://iosdose.com

Cocoa IOSurfaces and synchronization with background task pulling frames via Quicktime

I have a question regarding IOSurface on Cocoa.
After the extensive research required to switch my real-time OpenGL application to 64 bit, I've taken the only path available to support QuickTime playback: spawning a background process that pulls the frames by installing a frame-ready callback and QTVisualContextCopyImageForTime, and passes the IOSurfaceRef through RPC to the parent process.
Everything works fine, but there is one main issue. In my 32-bit application I was able to serialize every call to the GL subsystem: render a frame, pull the QT frames for the next pass, then wait for the next V-sync. This produced a very smooth and stable result.
Using the IOSurface technique gives me no way to synchronize when my app draws a frame with when the background process pulls the IOSurface from the QuickTime movie. The result is that, on a random basis, I experience performance spikes. Indeed, the OpenGL Driver Monitor shows CPU wait cycles going up to 10% in my 64-bit app, while the CPU wait graph stays at 0% under 32 bit.
Has anyone here used IOSurface in a real-world application and faced issues like this one? I've thought about an interprocess mutex/lock, but considering that I need to lock/unlock about 120 times a second, I was not able to find a valid solution; it doesn't seem that Darwin has anything like the named signals available in Win32...
Any suggestion, or I should take a totally different approach to the problem ?
Thanks !

Power Management in Symbian

Are there any "best practices" for writing a power-efficient background application in Symbian?
Specifically, is there any way (i.e. an API) for a Symbian app to hint to the OS about its current state in order to reduce battery consumption?
In Android, for instance, there is the notion of Wake Locks, which prevents the device from going into standby mode - Is there anything similar in Symbian?
EDIT:
Are there any implications when running code as a separate thread with the Open-C library, and not as "native" Symbian C++, using Active Objects etc.? (the Open-C code is blocking on IO most of the time).
You can check for user (in)activity with the RTimer::Inactivity() method. This approach is described on a Forum Nokia Wiki page, which also explains how to reset the inactivity timer. A minimal sketch follows.
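For example (a sketch; in real code you would drive this from an active object rather than blocking the thread):

RTimer timer;
User::LeaveIfError(timer.CreateLocal());

TRequestStatus status;
timer.Inactivity(status, TTimeIntervalSeconds(30)); // completes after 30 s without user input
User::WaitForRequest(status);                       // blocking here only for brevity

timer.Close();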
You can check whether the device screen is turned on or off using the HAL API (see the HAL and HALData classes). You can use a call like this:
TInt displayState;
HAL::Get(HALData::EDisplayState, displayState); // returns KErrNone on success
displayState will then hold 0 if the display is turned off, or 1 otherwise.
With these APIs you will know whether the user is currently active, so you can change the behaviour of your background service to reduce its power consumption.
You can also use the Nokia Energy Profiler application to record the handset's power consumption under different power-saving options of your background service. Also refer to Nokia's document describing best practices for saving device power; it is quite basic, but useful nonetheless.
Hope this helps.
EDIT: Regarding the separate thread and Open C: as far as I know, Open C is just a plugin, and deep down all the implementations are still "native Symbian". So as long as you avoid periodically polling some resource and just use ordinary blocking IO, your code is about as economical on power as the standard Symbian Active Object techniques (which use Symbian-specific semaphores to block threads).
I have not come across anything special in Symbian to keep the device out of stand-by mode. Basically the "best practices" are the same as for all mobile devices:
Don't loop waiting for things; always use whatever signalling services are available on the platform (for Symbian, active objects / User::WaitForXxx).
Limit the number of background threads (mobile devices are still single-CPU for now...).
Don't hang onto system services; close them ASAP (this is normally the main battery drain in my mobile applications, and finding which system service causes the most drain can be a real pain; WinMo is very bad for this).
For me, it mostly comes down to a trade-off between battery life and the application's performance/responsiveness. Unfortunately the powers that be always seem to side with performance/responsiveness and damn the battery drain...
Give your application a low priority (see the RProcess and RThread classes); a short sketch follows. Your approach will really depend on what your background application does. These things consume the most battery: the radio (GSM/3G/Wi-Fi/Bluetooth), the screen backlight, and file access.
Symbian OS will always try to put your application to sleep; you don't need to tell it to do so. Just make sure your approach gives it the opportunity.
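For instance (a sketch using the standard e32 classes; adjust the priorities to taste):

#include <e32std.h>

// Lower the whole process's priority so foreground applications are scheduled first...
RProcess().SetPriority(EPriorityBackground);

// ...or lower only the current thread.
RThread().SetPriority(EPriorityLess);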
Power management is a very important issue when developing applications.
In Symbian it depends on what you use to run background activities: a thread or active objects.
For example, if you are developing a browser and want it to download something, that download should run in the background, report its progress, and come back to the foreground when it finishes.
If you are using threads, it comes down to how you manage them: which thread to pause when a long-running activity starts, and when to resume it once the background activity has finished executing.
In fact, this is a very good topic you have come across.
There used to be an inactivity timer which could be reset by the application; this would prevent the screen from going into any screen-saver mode.
If you use the various asynchronous functions in Symbian, your app will run when appropriate.
One of these methods should work depending on your needs. If you describe what you want to achieve in more detail it would be easier to help you.