The Particle Designer website says SpriteKit support is coming, but other people have said it already works. Does anyone know for sure? I'd like to find out before purchasing it. If it doesn't support SpriteKit, does anyone know of an alternative?
There have been, and still are, issues with SpriteKit on the Mac, which has slowed down our ability to support SK in Particle Designer. We are still working on support, and we are also providing our own particle system to use with SK, which will help if the SK bugs are not fixed quickly. Our own particle system for SK will support all the systems currently available in PD. We are testing it at the moment and hope to release it soon.
For my university project I must develop a Windows app which recognises a user based on two biometrics: fingerprint and facial heat signature. This is very new and exciting territory for me, as I will encounter difficult challenges that I have not yet faced, and the learning curve will be very steep but fruitful.
My question relates to the camera which I will attempt to use for facial heat signature recognition. This is it: http://support.logitech.com/en_us/product/brio
It is relatively new and Logitech has not released a developer SDK for it, and as such I am stuck on how to get under its hood/bonnet and integrate it with my app. I am looking for advice on how I can go about doing this, and whether it is feasible at all. If it is not, then I cannot afford to waste my time on it and will have to come up with new ideas.
As an aside, it can be used for Windows Hello.
In short, I am looking for advice on how I can approach this challenge or whether I should at all. Thank you.
Try accessing it through the MediaCapture and MediaFrameSource classes; that works for me. But it's only a 340x340, 30 fps camera, and the IR diode blinks at about 15-20 Hz, so there are blinks in the IR frames.
I used C#.
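For anyone hitting the same wall, here is a minimal sketch of that approach: enumerating a source group that exposes an infrared stream and reading frames from it. This assumes a UWP project; the classes are the standard Windows.Media.Capture.Frames API, but the overall flow and helper names (e.g. StartIrCaptureAsync) are just my illustration, not the answerer's actual code.

using System.Linq;
using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;

public static class IrCameraDemo
{
    public static async Task StartIrCaptureAsync()
    {
        // Find a source group that exposes an infrared stream.
        var groups = await MediaFrameSourceGroup.FindAllAsync();
        var irGroup = groups.FirstOrDefault(g =>
            g.SourceInfos.Any(i => i.SourceKind == MediaFrameSourceKind.Infrared));
        if (irGroup == null) return; // no IR-capable camera found

        var capture = new MediaCapture();
        await capture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            SourceGroup = irGroup,
            SharingMode = MediaCaptureSharingMode.ExclusiveControl,
            MemoryPreference = MediaCaptureMemoryPreference.Cpu,
            StreamingCaptureMode = StreamingCaptureMode.Video
        });

        // Pick the infrared source out of the initialized capture object.
        var irSource = capture.FrameSources.Values.First(
            s => s.Info.SourceKind == MediaFrameSourceKind.Infrared);
        var reader = await capture.CreateFrameReaderAsync(irSource);
        reader.FrameArrived += (sender, args) =>
        {
            using (var frame = sender.TryAcquireLatestFrame())
            {
                // frame?.VideoMediaFrame?.SoftwareBitmap holds the IR image.
            }
        };
        await reader.StartAsync();
    }
}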
I've recently acquired a new MS Kinect v2 for Windows, and I'm experimenting with it to learn how it works and how I would approach my future ideas for it.
For now, I'm only trying out the samples that come with the Kinect Browser (downloaded with the new SDK), using an almost-new Toshiba C55 notebook (i5 2.5 GHz, 8 GB RAM, NVIDIA 710M).
The thing is, I've tried the "Coordinate Mapping Basics" sample, which comes in several forms (D2D, XAML, HTML and WPF). This sample just removes the background using the depth frame.
I've tried all the versions so far, and the XAML sample runs very, very slowly... while the rest run very smoothly...
So I tried external code from GitHub that technically does the same thing, also using XAML. It also runs too slowly.
Since I'm not used to developing for MS platforms, I don't know whether it is really a hardware problem or whether XAML has higher requirements, and I cannot figure out why it behaves so badly only with XAML.
I tried to find similar questions, but didn't find any that seemed useful for my case.
I know it's probably my fault, but I don't know why... Maybe a misunderstanding of the whole setup?
The external sample I found: https://github.com/Vangos/kinect-2-background-removal
I also tried the CoordinateMapper sample from the same GitHub account; same issue: https://github.com/Vangos/kinect-2-coordinate-mapping
Thank you all.
UPDATE:
After developing and deploying the WPF app successfully, I started checking the Kinect's performance on Windows RT, and I found lots of problems at the memory level. Windows 8.1 RT is slow and does not support the Kinect v2 very well, at least on my test hardware. These problems may lead to the symptoms described in this other question I found: Kinect camera freeze
This issue also made me notice that the new Kinect v2 is VERY sensitive to ambient temperature.
Hope this helps some overflowed developers with similar problems :).
The Coordinate Mapper XAML and Coordinate Mapper WPF samples both use XAML. The version marked "XAML" is a Windows Store App. The version marked "WPF" is a Windows Desktop app. I didn't see much of a difference on my machine between the two until I ran the Performance and Diagnostics tools in Visual Studio 2013. I suggest running them and creating an analysis report. That will help you discover what exactly is causing the differences.
Does anyone know of any way to programmatically control the RPMs of a Mac's fans? I briefly checked the Apple Dev site, but couldn't find anything. I'm guessing it's not as easy as:
[fans faster];
I'm wondering how smcFanControl achieves this. Am I right to assume that the "smc" in "smcFanControl" stands for System Management Controller?
Update:
smcFanControl's source code is released under the GPL! ^_^ Oh, yeah! Free knowledge!
You are correct on two counts: SMC does stand for "System Management Controller", and fooling around with it isn't as simple as [fans faster]. Programming the SMC requires knowledge of the firmware and some down-and-dirty hardware device driver programming. You'd probably have to talk to the manufacturer just to get the specs, and even then, you're not going to be able to program it in Objective-C. Alas, you're probably better off trying to control smcFanControl using AppleScript :)
This answer may not be able to help much; I don't have a Mac, so I don't know how it works there, but I can cover some basics.
Generally, a computer's fan speed control is handled by a SuperIO chip or a BMC (baseboard management controller) chip.
If your board uses a SuperIO, it is very hard to modify the fan speed, since the speed-adjustment algorithm is fixed (fused) inside the chip.
If the board uses a BMC or a similar solution, it controls the speed in firmware, and most firmware is upgradeable with a special tool.
Both of the above provide hardware-level fan speed control, which the OS cannot change.
I suppose smcFanControl on the Mac is not directly commanding a fan speed, but rather enabling a "smart fan speed control" function.
I'm interested in writing some homebrew code for the Microsoft Kinect sensor. I have a few applications which I think would translate well to the platform. I've been toying with the idea of giving it a shot using the OpenKinect drivers and libraries. Obviously this would be a lot of work, but I am wondering just how much. Does anyone have experience with OpenKinect? Do you get only the raw video/audio data from the device, or has anyone written higher-level abstractions to make common tasks easier?
The OpenKinect library is basically a driver, at least for now, so don't expect many high-level functions from it. You will more or less get the raw data from both the depth and the video cameras.
This is basically an array received in a callback function each time a frame arrives.
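To make that concrete, here's a small sketch of the kind of processing you'd do on such a raw frame: the Kinect's depth camera delivers 11-bit values in a 640x480 array, and a common first step is squashing them to 8-bit grayscale for display. The buffer and method names here are hypothetical stand-ins for whatever your driver callback hands you, not part of the OpenKinect API.

public static class DepthFrameDemo
{
    const int Width = 640, Height = 480;

    // Called once per arriving frame with the raw 11-bit depth values.
    public static byte[] ToGrayscale(ushort[] rawDepth)
    {
        var pixels = new byte[Width * Height];
        for (int i = 0; i < rawDepth.Length; i++)
        {
            // 11-bit range (0..2047) -> 8-bit range (0..255)
            pixels[i] = (byte)(rawDepth[i] >> 3);
        }
        return pixels;
    }
}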
You can give it a try by following the instructions on the OpenKinect website; it's really quick to install, and you can play a bit with the provided glview application to get a feel for what's possible.
I've set up a few demos using OpenCV and got pretty cool results, even though I didn't have much background in computer vision, so I can only encourage you to try it yourself!
Alternatively, if you're looking for more advanced functionality, the OpenNI framework was just released this week and provides some impressive high-level algorithms such as skeleton tracking and some gesture recognition. Part of the framework consists of proprietary algorithms from PrimeSense (like the powerful skeleton-tracking module). I haven't tried it yet and don't know how well it integrates with the Kinect on the different OSes, but since a bunch of people from different groups (OpenKinect, Willow Garage...) are working hard on it, that shouldn't be an issue within a week.
Elaborating further on what Jules Olleon wrote, I've worked with OpenNI (http://www.openni.org) and the algorithms on top of it (NITE), and I highly recommend using these frameworks. Both frameworks are well documented and come with numerous samples you can start from.
Basically, OpenNI abstracts the lower-level details of working with the sensor and its driver for you, and gives you a convenient way to get what you want from a "generator" (e.g. xn::DepthGenerator for getting the raw depth data). OpenNI is open source and free to use in any application, and it also handles the platform abstraction for you. As of today, OpenNI is supported and works fine on Windows 32/64 and Linux, and is in the process of being ported to OS X. Bindings are available for multiple programming languages (C, C++, .NET, Python, and a few others, I believe).
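As a rough illustration of that "generator" idiom through the .NET binding mentioned above, the sketch below pulls raw depth with a DepthGenerator. The method names follow the OpenNI 1.x .NET samples from memory; treat the exact signatures as assumptions and check them against the wrapper that ships with your install.

// Hypothetical sketch against the OpenNI 1.x .NET binding (OpenNI.net.dll).
// Names mirror the C++ API (xn::Context, xn::DepthGenerator).
using System;
using OpenNI;

class DepthReadDemo
{
    static void Main()
    {
        Context context = new Context();                    // initialize OpenNI
        DepthGenerator depth = new DepthGenerator(context); // "generator" for raw depth
        context.StartGeneratingAll();

        for (int i = 0; i < 100; i++)
        {
            context.WaitOneUpdateAll(depth);                // block until a new depth frame
            DepthMetaData md = depth.GetMetaData();
            // Depth values are in millimeters; sample the center pixel.
            Console.WriteLine("Center depth: {0} mm", md[md.XRes / 2, md.YRes / 2]);
        }
        context.Dispose();
    }
}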
NITE provides additional interfaces built on top of OpenNI, which give you higher-level results (e.g. hand-point tracking, skeletons, scene analysis, etc.). You'll want to check the subtleties of NITE's license regarding when/where you can use it, but it's still probably the easiest and fastest way to get analysis (e.g. a skeleton) for now. NITE is closed source, so PrimeSense needs to supply a binary version for you to use. Currently, Windows and Linux versions are available.
I haven't worked with OpenKinect, but I've been working with OpenNI and SensorKinect for a few months now for my research. If you are planning to work with raw data from the Kinect, they work great for getting depth and video (they don't support motor control). I've used them with C++ and OpenGL on both 64-bit Windows and 32-bit Ubuntu with almost no modifications to the code. It's very easy to learn if you know basic C++; installing it might be a little headache, though.
For more advanced features such as skeleton detection, gesture recognition, etc., I highly recommend using middleware such as NITE with OpenNI, or one of the ones listed here: Middlewares developed around OpenNI, rather than re-inventing the wheel. NITE is also very easy to use once you have OpenNI working; e.g. joint recognition is something around 10-20 extra lines of code.
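To back up that "10-20 extra lines" claim, here's roughly what joint tracking looks like through OpenNI/NITE's user generator, again sketched against the .NET binding from memory; the event and capability names are assumptions to verify against your version (older releases name CalibrationComplete differently, for instance).

// Hypothetical sketch: NITE skeleton tracking via OpenNI's UserGenerator.
// Assumes an initialized Context (see the earlier sketch).
using System;
using OpenNI;

static class SkeletonDemo
{
    public static void Run(Context context)
    {
        UserGenerator userGen = new UserGenerator(context);
        SkeletonCapability skel = userGen.SkeletonCapability;
        skel.SetSkeletonProfile(SkeletonProfile.All);

        // When a user appears, ask NITE to calibrate; once calibrated, track.
        userGen.NewUser += (s, e) => skel.RequestCalibration(e.ID, true);
        skel.CalibrationComplete += (s, e) =>
        {
            if (e.Status == CalibrationStatus.OK)
                skel.StartTracking(e.ID);
        };
        userGen.StartGenerating();

        while (true)
        {
            context.WaitOneUpdateAll(userGen);
            foreach (int id in userGen.GetUsers())
            {
                if (!skel.IsTracking(id)) continue;
                SkeletonJointPosition head =
                    skel.GetSkeletonJointPosition(id, SkeletonJoint.Head);
                Console.WriteLine("User {0} head at {1}", id, head.Position);
            }
        }
    }
}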
Something I would recommend to my younger self: learn and work with a basic game engine (e.g. Unity) rather than directly with OpenGL. It gives you much better and more enjoyable graphics with less hassle, and also lets you easily integrate your program with other tools such as PhysX. I haven't tried any, but I know there are some plugins for using Kinect drivers in Unity.
Over the last two months I've worked on a simple application using a computer vision library (OpenCV).
I wish to run that application directly on the webcam hardware, without the need for an OS. I'm curious to know whether my application can be burned onto a chip so that no OS is needed to run it.
Of course the process can be expensive, but I'm just curious. Do you have any links about that?
PS: the application is written in C.
I'd use something bigger than a PIC, for example a small 32-bit ARM processor.
Yes. It is theoretically possible to port your app to PIC chips.
But...
There are C compilers for the PIC chip; however, due to the limitations of a microcontroller, you might find that the compiler, and the microcontroller itself, are far too limited for computer vision work, especially if your initial implementation of the app was done on a full-blown PC:
In most cases, if not all, you'll only have integer math available to you (don't quote me on that, but our devs at work don't have floating-point math for their PIC apps, and it causes many foul words to emanate from their cubes). Either that, or you'll need to hook up an external math coprocessor; see the fixed-point sketch after this list for the usual software workaround.
You'll have to figure out how to get the PIC chip to talk USB to the camera. I know this is possible, but it will require additional hardware and R&D time.
If you need strict timing control, you might even have to program the app in assembler.
You'd have to port portions of OpenCV to the PIC chip, if that hasn't been done already. My guess is it hasn't.
If you're not already familiar with microcontroller programming, you'll need some time to get up to speed on the differences between desktop PC programming and microcontroller programming, and you'll have to gain some experience there. This may not be an issue for you.
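On the first point (integer-only math), the usual workaround is fixed-point arithmetic: scale your reals by a power of two and keep everything in integers. Here's a minimal Q16.16 sketch, written in C# purely for readability; on a PIC you'd express the same idea in C.

// Q16.16 fixed-point: upper 16 bits hold the integer part, lower 16 the
// fraction. This is the standard workaround when there's no FPU.
static class Fixed
{
    const int Shift = 16;

    public static int FromDouble(double d) => (int)(d * (1 << Shift));
    public static double ToDouble(int q) => q / (double)(1 << Shift);

    // Multiply via a 64-bit intermediate, then shift back down.
    public static int Mul(int a, int b) => (int)(((long)a * b) >> Shift);

    public static int Div(int a, int b) => (int)(((long)a << Shift) / b);
}

class FixedDemo
{
    static void Main()
    {
        int a = Fixed.FromDouble(3.25);
        int b = Fixed.FromDouble(0.5);
        System.Console.WriteLine(Fixed.ToDouble(Fixed.Mul(a, b))); // 1.625
    }
}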
Basically, it would probably be best to re-write the whole program from scratch given a PIC chip's constraints. The good thing, though, is that you've already done a lot of the design work. It would mainly be hardware/porting work.
OR...
You could try using a small embedded x86 single-board PC, perhaps in the PC/104 form factor, with your OS/app on a CF card. It's a real, bona fide PC; you just add your software. The good thing is, you probably wouldn't have to re-write your app unless it had a ridiculous memory footprint. Embedded PC vendors are starting to ship boards based on 1 GHz Intel Atoms, and if you needed more help you could perhaps hook a daughterboard onto the PC/104 bus. You'd work around all of the limitations listed above, since you'd be using a platform equivalent to the PC you developed your app on. And it has USB ports! If you do a thorough cost analysis, and if you're OK with a larger form factor, you might find it cheaper and quicker to use an SBC-based system than to roll a solution using PIC chips/microcontrollers.
A quick search for PC/104 on Google will reveal many vendors of SBCs.
OR...
And this would be really cheap: just get a cheap off-the-shelf netbook, overwrite the OEM OS, and run the code on it. Hackish, but cheap and really easy; your hardware issues would be resolved within a week.
Just some ideas.
I think you'll find this might grow into a pretty large project.
It's obviously possible to implement a stand-alone hardware solution to do something like this. Off the top of my head, Rabbit's solutions might get you to the finish line faster, but you might be able to find some home-grown BeagleBoard or Gumstix projects as well.
Two Google links I wanted to emphasize:
Rabbit: "Camera Interface Application Kit"
Gumstix: "Connecting a CMOS camera to a Gumstix Connex motherboard"
I would second Nate's recommendation to take a look at Rabbit's core modules.
Also, GHI Electronics has a product called the Embedded Master that runs the .NET Micro Framework and has USB host/device capabilities built in, as well as a rich library that is a subset of the .NET Framework. It runs on an ARM processor and is fairly inexpensive (just over $85). Though not nearly as cheap as a single PIC chip, it does come with a lot of glue logic pre-built onto the module.
CMUcam
I think you should have a look at the CMUcam project, which offers affordable hardware and an image-processing library that runs on that hardware.