For my university project I must develop a Windows app which recognises a user based on two biometrics: fingerprint and facial heat signature. This is new and exciting territory for me; I will encounter difficult challenges I have not yet faced, and the learning curve will be steep but fruitful.
My question relates to the camera I will attempt to use for facial heat signature recognition. This is it: http://support.logitech.com/en_us/product/brio
It is relatively new and Logitech have not released any developer SDK for it, so I am stuck on how to get under its hood/bonnet and integrate it with my app. I am looking for advice on how I can go about doing it, and on whether it is feasible at all. If it is not, I cannot afford to waste my time on it and will have to come up with new ideas.
As an aside, it can be used for Windows Hello.
In short, I am looking for advice on how I can approach this challenge or whether I should at all. Thank you.
Try accessing it through the MediaCapture and MediaFrameSource classes; that works for me. Note that it is only a 340x340, 30 fps camera, and the IR illuminator blinks at about 15-20 Hz, so there is visible flicker in the IR frames.
I used C#.
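The answer above used C#, but the same WinRT classes are reachable from C++/WinRT. Below is a minimal, hedged sketch of that approach: find a frame source group that exposes an infrared stream, then read frames from it. Error handling is omitted, and in a real app you would keep the MediaCapture and MediaFrameReader alive as members rather than as coroutine locals.

```cpp
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Media.Capture.h>
#include <winrt/Windows.Media.Capture.Frames.h>

using namespace winrt;
using namespace Windows::Media::Capture;
using namespace Windows::Media::Capture::Frames;

fire_and_forget StartInfraredAsync()
{
    // Find a source group that exposes an infrared source.
    MediaFrameSourceGroup irGroup{ nullptr };
    for (auto const& group : co_await MediaFrameSourceGroup::FindAllAsync())
        for (auto const& info : group.SourceInfos())
            if (info.SourceKind() == MediaFrameSourceKind::Infrared)
                irGroup = group;
    if (!irGroup) co_return;

    // Initialize capture against that group, with CPU-accessible frames.
    MediaCapture capture;
    MediaCaptureInitializationSettings settings;
    settings.SourceGroup(irGroup);
    settings.MemoryPreference(MediaCaptureMemoryPreference::Cpu);
    settings.StreamingCaptureMode(StreamingCaptureMode::Video);
    co_await capture.InitializeAsync(settings);

    // Create a reader on the infrared source and handle frames as they arrive.
    for (auto const& kv : capture.FrameSources())
    {
        if (kv.Value().Info().SourceKind() != MediaFrameSourceKind::Infrared)
            continue;
        auto reader = co_await capture.CreateFrameReaderAsync(kv.Value());
        reader.FrameArrived([](MediaFrameReader const& r, MediaFrameArrivedEventArgs const&)
        {
            if (auto frame = r.TryAcquireLatestFrame())
            {
                // frame.VideoMediaFrame().SoftwareBitmap() holds the IR image.
            }
        });
        co_await reader.StartAsync();
    }
}
```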
I wanted to know what steps one would need to take to "hack" a camera's firmware to add/change features, specifically cameras of Canon or Olympus make.
I understand this is an involved topic, but a general outline of the steps and what issues I should keep an eye out for would be appreciated.
I presume the first step is to take the firmware, load it into a decompiler (any recommendations?) and examine the contents. I admit I've never decompiled code before, so this will be a good challenge to get me started. Any advice? Books? Tutorials? What should I expect?
Thanks stack as always!
Note: I know about Magic Lantern and CHDK; I want technical advice on how they were started and came to be.
http://magiclantern.wikia.com/wiki/Decompiling
http://magiclantern.wikia.com/wiki/Struct_Guessing
http://magiclantern.wikia.com/wiki/Firmware_file
http://magiclantern.wikia.com/wiki/GUI_Events/550D
http://magiclantern.wikia.com/wiki/Register_Map/Brute_Force
"I wanted to know what steps one would need to take to 'hack' a camera's firmware to add/change features, specifically cameras of Canon or Olympus make."
General steps for this kind of hacking/reverse engineering:
1) Gathering information about the camera system (main CPU, image coprocessor, RAM/flash chips, ...). Challenges: camera makers tend to hide such sensitive information, and datasheets/documentation for proprietary chips are often not released to the public at all.
2) Getting the firmware: either by dumping the flash memory inside the camera or by extracting the firmware from the update packages used for firmware updates. Challenges: accessing the flash readout circuitry is not a trivial job, especially since camera systems have some of the most densely populated PCBs around; also, proprietary firmware is often heavily protected with encryption when embedded into update packages.
3) Disassembly: getting a "bit" more readable instructions out of the opcodes (see the sketch after this list). Challenges: although disassemblers are widely available, they give you the "operational" equivalent assembly code with no guarantee of it being human-readable or meaningful.
4) Customization: once you understand most of the code's functionality, you can make modifications, which must not harm the normal operation of the camera system. Challenges: not an easy task.
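As a taste of step 3, here is a hedged sketch using the open-source Capstone disassembler. The choice of 32-bit ARM and the 0xFF000000 load address are assumptions for illustration only; for a real dump you must determine the CPU core and ROM mapping yourself, and note that a naive linear sweep will also "disassemble" data sections into garbage.

```cpp
// Sketch: linear-sweep disassembly of a raw firmware dump with Capstone.
#include <capstone/capstone.h>
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) return 1;

    // Load the raw dump (path given on the command line).
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) return 1;
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<uint8_t> code(size);
    std::fread(code.data(), 1, code.size(), f);
    std::fclose(f);

    // Assumption: 32-bit ARM code, mapped at 0xFF000000.
    csh handle;
    if (cs_open(CS_ARCH_ARM, CS_MODE_ARM, &handle) != CS_ERR_OK) return 1;

    cs_insn* insn;
    size_t count = cs_disasm(handle, code.data(), code.size(), 0xFF000000, 0, &insn);
    for (size_t i = 0; i < count; ++i)
        std::printf("0x%" PRIx64 ":\t%s\t%s\n",
                    insn[i].address, insn[i].mnemonic, insn[i].op_str);

    cs_free(insn, count);
    cs_close(&handle);
    return 0;
}
```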
Alternatively, I highly recommend looking at an already open-source camera (software and also hardware); you can learn a lot about camera systems from them. Such projects include Elphel and AXIOM.
I am working right now on an application where I need to find out the user's heart rate. I have found plenty of applications that do this, but I am not able to find a single private or public API that supports it.
Is there any framework available that could help? I was also wondering whether the UIAccelerometer class could be useful for this, and what level of accuracy it could achieve.
How would one implement this feature: by having the user put a finger on the iPhone camera, by placing the microphone on the jaw or wrist, or some other way?
Is there any way to detect changes in blood circulation and derive the heart rate from them, perhaps using UIAccelerometer? Any API or sample code? Thank you.
There is no API for detecting heart rate; these apps do it in a variety of ways.
Some use the accelerometer to measure how the device shakes with each pulse. Others use the camera lens, with the flash on, and detect blood moving through the finger from the changes in light level that can be seen.
Various DSP techniques can be used to discern very low-level periodic signals from a long enough set of samples taken at an appropriate sample rate (from the accelerometer, or from the reflected light colour).
Some of the advanced math functions in the Accelerate framework API can be used as building blocks for these DSP techniques. A full explanation would require several chapters of a digital signal processing textbook, so that might be a good place to start.
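To make the idea concrete, here is a toy, hedged sketch of one such technique: an autocorrelation peak search over the sampled signal (which could be the mean brightness of camera frames or the accelerometer magnitude). The sample rate and the 40-200 BPM search range are assumptions, and real code would filter and detrend the signal first.

```cpp
#include <vector>
#include <cstddef>

// Estimate pulse rate (beats per minute) from a sampled signal by finding
// the lag at which the signal best correlates with itself.
double estimateBpm(const std::vector<double>& x, double sampleRateHz)
{
    // Remove the mean so the correlation reflects only the periodic part.
    double mean = 0;
    for (double v : x) mean += v;
    mean /= x.size();

    // Search lags corresponding to 40-200 beats per minute.
    int minLag = static_cast<int>(sampleRateHz * 60.0 / 200.0);
    int maxLag = static_cast<int>(sampleRateHz * 60.0 / 40.0);
    int bestLag = minLag;
    double bestScore = -1e300;
    for (int lag = minLag; lag <= maxLag && lag < (int)x.size(); ++lag)
    {
        double score = 0;
        for (std::size_t i = 0; i + lag < x.size(); ++i)
            score += (x[i] - mean) * (x[i + lag] - mean);
        if (score > bestScore) { bestScore = score; bestLag = lag; }
    }
    return 60.0 * sampleRateHz / bestLag;
}
```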
I'm working on a pretty complicated app right now, but I just got a really good, niche-market idea for an AR game for iPhone. I would love to do some preliminary research on whether or not it is worth the effort. I have only a few days (about 4) in which to code this. Is that a realistic timeline for what I'm trying to accomplish?
While I'm pretty familiar with CMDeviceMotion and can get location updates from GPS, there are 4 features that I think may take a colossal amount of work:
1) Working with the camera in real time to draw augmented-reality controls. Are there any good tutorials on how to overlay a view on top of a live camera feed?
2) Making the app work when GPS reception is spotty. Some apps seem to keep updating the location from the accelerometer/gyroscope, dead-reckoning from the last known location. Where would I start on this front?
3) The networking component. I'm very new to multiplayer games. I have a website that can run PHP. Should I abandon my networking idea until I get a proper web server, or is there some way to run this peer-to-peer over 3G without a base station?
4) Google Maps integration for fast updates. Does this take a lot of effort?
I'm sorry if any of these questions are too broad and vague. I'm very excited about this idea, but would like to know what I'm dealing with before spending time on the app and realizing that I'm dealing with a monumental task!
I think you are dealing with a monumental task (especially the multiplayer part, where you'll encounter issues like lag/timing).
For the augmented reality part of your project, you can take a look at mixare augmented reality engine. It's free and open source software and the code is available on github: https://github.com/mixare/
Be aware that if you base your code upon mixare, you'll have to release your app under the same GPLv3 license as mixare.
Good luck with your project!
I'm interested in writing some homebrew code for the Microsoft Kinect sensor. I have a few applications which I think would translate well to the platform. I've been toying with the idea of giving it a shot using the OpenKinect drivers and libraries. Obviously this would be a lot of work, but I am wondering just how much. Does anyone have experience with OpenKinect? Do you get only the raw video/audio data from the device, or has anyone written higher-level abstractions to make common tasks easier?
The OpenKinect library is basically a driver, at least for now, so don't expect many high-level functions from it. You will more or less get the raw data from both the depth and the video cameras.
This is basically an array received in a callback function each time a frame arrives.
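For illustration, here is a hedged sketch of that callback model using the libfreenect C API (it compiles as C++). Device index 0 and the medium-resolution 11-bit depth format are assumptions.

```cpp
#include <libfreenect.h>
#include <cstdint>
#include <cstdio>

// Called once per depth frame with a raw buffer (640x480 11-bit values
// packed into uint16_t for this mode).
static void depth_cb(freenect_device* dev, void* depth, uint32_t timestamp)
{
    uint16_t* pixels = static_cast<uint16_t*>(depth);
    std::printf("frame at %u, raw depth at centre: %u\n",
                timestamp, pixels[240 * 640 + 320]);
}

int main()
{
    freenect_context* ctx;
    freenect_device* dev;
    if (freenect_init(&ctx, nullptr) < 0) return 1;
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;

    freenect_set_depth_mode(dev,
        freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_set_depth_callback(dev, depth_cb);
    freenect_start_depth(dev);

    // Pump USB events; each completed frame triggers depth_cb.
    while (freenect_process_events(ctx) >= 0) { /* loop */ }

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}
```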
You can give it a try by following the instructions provided on the OpenKinect website; it's really quick to install, and you can play a bit with the bundled glview application to get a feel for what's possible.
I've set up a few demos using OpenCV and got pretty cool results, even though I didn't have much background in computer vision, so I can only encourage you to try it yourself!
Alternatively, if you're looking for more advanced functions, the OpenNI framework was just released this week and provides some impressive high-level algorithms such as skeleton tracking and some gesture recognition. Part of the framework consists of proprietary algorithms from PrimeSense (like the powerful skeleton-tracking module). I haven't tried it yet and don't know how well it integrates with the Kinect and the different OSes, but since a bunch of people from different groups (OpenKinect, Willow Garage, ...) are working hard on it, that shouldn't be an issue within a week.
Elaborating further on what Jules Olleon wrote: I've worked with OpenNI (http://www.openni.org) and the algorithms on top of it (NITE), and I highly recommend using these frameworks. Both are well documented and come with numerous samples from which you can start out.
Basically, OpenNI abstracts away the lower-level details of working with the sensor and its driver, and gives you a convenient way to get what you want from a "generator" (e.g. xn::DepthGenerator for getting the raw depth data). OpenNI is open source and free to use in any application, and it also handles platform abstraction for you. As of today, OpenNI is supported and works fine on Windows 32/64-bit and Linux, and is in the process of being ported to OS X. Bindings are available for multiple programming languages (C, C++, .NET, Python, and a few others, I believe).
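As a hedged sketch of that generator pattern (OpenNI 1.x C++ wrapper; error reporting is omitted, and the bare Init() call is used instead of the usual XML configuration file):

```cpp
// Sketch: pull raw depth frames from an OpenNI 1.x DepthGenerator.
#include <XnCppWrapper.h>
#include <cstdio>

int main()
{
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;

    xn::DepthGenerator depth;
    if (depth.Create(context) != XN_STATUS_OK) return 1;

    context.StartGeneratingAll();
    for (int i = 0; i < 100; ++i)
    {
        // Block until a new depth frame is available, then read it.
        context.WaitOneUpdateAll(depth);
        xn::DepthMetaData md;
        depth.GetMetaData(md);
        std::printf("frame %u: %ux%u, centre depth %u mm\n",
                    md.FrameID(), md.XRes(), md.YRes(),
                    (unsigned)md(md.XRes() / 2, md.YRes() / 2));
    }
    context.Shutdown();
    return 0;
}
```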
NITE provides additional interfaces built on top of OpenNI which give you higher-level results (e.g. hand-point tracking, skeletons, scene analysis, etc.). You'll want to check the subtleties of NITE's license regarding when/where you can use it, but it's still probably the easiest and fastest way to get analysis (e.g. a skeleton) for now. NITE is closed source, so PrimeSense needs to supply a binary version for you to use. Currently Windows and Linux versions are available.
I haven't worked with OpenKinect, but I've been working with OpenNI and SensorKinect for a few months now for my research. If you are planning to work with raw data from the Kinect, they work great for getting depth and video (they don't support motor control). I've used them with C++ and OpenGL on both 64-bit Windows and 32-bit Ubuntu with almost no modifications to the code. It's very easy to learn if you know basic C++, although installing it might be a bit of a headache.
For more advanced features such as skeleton detection, gesture recognition, etc., I highly recommend using middleware such as NITE with OpenNI, or one of the middlewares developed around OpenNI, rather than reinventing the wheel. NITE is also very easy to use once you have OpenNI working; e.g. joint recognition takes around 10-20 extra lines of code.
Something I would recommend to my younger self is to learn and work with a basic game engine (e.g. Unity) rather than directly with OpenGL. It gives you much better and more enjoyable graphics with less hassle, and also lets you easily integrate your program with other tools such as PhysX. I haven't tried any myself, but I know there are plugins for using Kinect drivers in Unity.
I'd like to start messing around programming and building something with an Arduino board, but I can't think of any great ideas on what to build. Do you have any suggestions?
I show kids who have never programmed or done any electronics before how to make a simple 'phototrope', a light-sensitive robot, in about a day. It costs under £30 (GBP) including the Arduino, electronics and off-the-shelf mechanics. If folks really get into mobile robots, the initial project can grow and grow (which I feel is part of the fun).
There are international robot competitions which require relatively simple mechanics to get started, e.g. in the UK http://www.tic.ac.uk/micromouse/toh.asp
Ultimate performance requires specially built machines (for lightness), but folks can get creditable results with an Arduino Nano, the right electronics and a couple of good motors.
A line-following robot is the classic mobile robot project. The track can be as simple as electrical tape. Pololu have some fun videos about their near-Arduino 3pi robot. The sensors are about £1, and there are plenty of simple motor+gearbox kits from lots of places for under £10. Add a few pounds for motor control and you have autonomous robot mechanics, in need of programming! Add an infrared remote receiver (about £1) and you can drive it around using your TV remote. Add a small solar cell, use an Arduino analogue input to measure its voltage, and it can find the sun. With a bit more electronics, it can 'feed' itself. And so it gets more sophisticated. Each step might be no more than a few hours to a few days of effort, and you'll find new problems to solve and learn from.
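A hedged Arduino sketch of that classic two-sensor line follower; the pin numbers, sensor polarity, threshold and motor driver wiring (a simple two-pin PWM interface) are all assumptions:

```cpp
const int LEFT_SENSOR = A0;   // analogue reflectance sensor over the line
const int RIGHT_SENSOR = A1;
const int LEFT_MOTOR = 9;     // PWM pins driving the motor controller
const int RIGHT_MOTOR = 10;
const int THRESHOLD = 500;    // reading above this means "on the tape"

void setup() {
  pinMode(LEFT_MOTOR, OUTPUT);
  pinMode(RIGHT_MOTOR, OUTPUT);
}

void loop() {
  bool leftOnLine = analogRead(LEFT_SENSOR) > THRESHOLD;
  bool rightOnLine = analogRead(RIGHT_SENSOR) > THRESHOLD;

  if (leftOnLine && rightOnLine) {        // centred: drive straight
    analogWrite(LEFT_MOTOR, 180);
    analogWrite(RIGHT_MOTOR, 180);
  } else if (leftOnLine) {                // drifted right: slow the left wheel
    analogWrite(LEFT_MOTOR, 80);
    analogWrite(RIGHT_MOTOR, 180);
  } else if (rightOnLine) {               // drifted left: slow the right wheel
    analogWrite(LEFT_MOTOR, 180);
    analogWrite(RIGHT_MOTOR, 80);
  } else {                                // lost the line: stop
    analogWrite(LEFT_MOTOR, 0);
    analogWrite(RIGHT_MOTOR, 0);
  }
}
```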
IMHO, the most interesting (low-cost) competitions are maze-solving robots. The international competition rules require the robot to explore a walled maze, usually using infrared sensors, and calculate its optimal route. The challenges include keeping track of the current position to near-millimetre accuracy, dealing with the real world's unpredictably noisy environment, and optimising straight-line speed against shortest-distance cornering.
All that in 16K of program memory and 1K of RAM, with real-time interrupt handling (as many as 100K interrupts/second for some motor systems), sensor sampling, motor speed control and maze solving, makes for an interesting programming challenge. (You might make it 'easy' with 32K of program memory and 2K of RAM. :-)
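As a small, hedged illustration of that kind of real-time work on an Arduino: counting wheel-encoder edges in an interrupt handler while the main loop takes atomic snapshots. Pin 2 as the encoder input and the 100 ms reporting interval are assumptions.

```cpp
volatile unsigned long edgeCount = 0;

void onEdge() { ++edgeCount; }  // keep the ISR tiny

void setup() {
  Serial.begin(115200);
  pinMode(2, INPUT_PULLUP);     // encoder channel on pin 2
  attachInterrupt(digitalPinToInterrupt(2), onEdge, RISING);
}

void loop() {
  noInterrupts();               // snapshot and reset the counter atomically
  unsigned long c = edgeCount;
  edgeCount = 0;
  interrupts();
  Serial.println(c);            // edges seen in the last ~100 ms
  delay(100);
}
```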
I'm working on a 'constrained' robot challenge (based on Arduino) so that robot performance is mainly about programming rather than having a big budget.
Start small and build up to something more complex. Control servos. Blink LEDs. Debounce inputs. Read analog sensors. Display text on an LCD. Then put it together.
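For example, debouncing an input, one of the steps above, amounts to requiring a reading to stay stable for a fixed time before accepting it. A hedged sketch (pin numbers are assumptions; the button is wired to ground with the internal pull-up enabled):

```cpp
// Sketch: toggle an LED on each debounced button press.
const int BUTTON_PIN = 2;
const int LED_PIN = 13;
const unsigned long DEBOUNCE_MS = 50;

int stableState = HIGH;        // debounced state (HIGH = released)
int lastReading = HIGH;
unsigned long lastChangeTime = 0;
bool ledOn = false;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  int reading = digitalRead(BUTTON_PIN);
  if (reading != lastReading) {      // raw input changed: restart the timer
    lastChangeTime = millis();
    lastReading = reading;
  }
  if (millis() - lastChangeTime > DEBOUNCE_MS && reading != stableState) {
    stableState = reading;           // stable long enough: accept the change
    if (stableState == LOW) {        // LOW = pressed with pull-up wiring
      ledOn = !ledOn;
      digitalWrite(LED_PIN, ledOn ? HIGH : LOW);
    }
  }
}
```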
Despite the name, I like the "Evil Genius" book for PIC microcontrollers because of the small, easily digestible projects that tend to build on one another. It is, of course, aimed at PIC programmers rather than the Arduino, but the material covered will be useful no matter what you're developing on.
I know Arduino is trendy right now, but I also like the Teensy++ development board because of its low price-point ($24), breadboard-compatible PCB, relatively high pin count, Linux development environment, USB connectivity, and not needing a programmer. Worth considering for smaller projects.
If you come up with something cool, let me know. I need an excuse to do something fun :)
Bicycle-related ideas:
theft alarm (perhaps with radio link to a base station which is connected to a PC by Ethernet)
fancy trip computer (with reed switch or opto sensor on wheel)
integrate with a GPS telematics unit (trip logging), with Ethernet/USB download of logged data to a PC. This also has an interesting PC programming component: integrating with Google Maps.
Other ideas:
Clock with automatic time sync from:
GPS receiver
FM radio signal with embedded RDS data with CT code
Digital radio (DAB+)
Mobile phone tower (would it require a subscription and SIM card for this receive-only operation?)
NTP server via:
Ethernet
WiFi
ZigBee (with a ZigBee coordinator that gets its time from e.g. Ethernet or GPS)
Mains electricity smart meter via ZigBee (I'm interested now that smart meters are being introduced in Victoria, Australia; not sure if the smart meters broadcast the time info though, and whether it requires authentication)
Metronome
Instrument tuner
This reverse-geocache puzzle box was an awesome Arduino project. You could take it a step further, e.g. a reverse-geocache box that gives out a clue only at a specific location; then, using physical clues found at that location coupled with the next clue from the box, you determine where to go for the next step.
You could enter one of the firefighting robot competitions. We built a robot at university for my bachelor's final project, but didn't have time to enter the competition. Plus, the robot needed some polish anyway... :)
Video here.
Mind you, this was done with a Motorola HC12 and a C compiler, and most components outside the microcontroller board were made from scratch, so it took longer than it should have. It should be much easier with prefab components.
Path finding/obstacle navigation is typically a good project to start with. If you want something practical, take a look at how iRobot's Roomba vacuums the floor and come up with a better scheme.
It depends on your background and whether you want practical or cool. On the practical side, a remote control could be a simple starting point: it's got buttons and lights but isn't too demanding.
For a cool project, maybe a Simon-style memory game, or anything with lights and noises (thinking theremin-style).
I don't have many suggestions, other than perhaps something like a line-follower robot, but I can help you with some links for inspiration:
Arduino tutorials
Top 40 Arduino Projects of the Web
20 Unbelievable Arduino Projects
I'm currently developing plans to automate my 30-year-old model train layout.
A POV device could be fun to build (just Google "POV Arduino"). POV means persistence of vision.
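A hedged sketch of the idea: flash a column of 8 LEDs pattern-by-pattern as the board is waved, so a letter appears to hang in the air. The pins, the timing constant and the 5x8 glyph are all assumptions to tune by eye.

```cpp
const int LED_PINS[8] = {2, 3, 4, 5, 6, 7, 8, 9};
const int COLUMN_US = 1200;     // how long each column stays lit

// Columns of a letter "H", one bit per LED.
const byte GLYPH[5] = {B11111111, B00011000, B00011000, B00011000, B11111111};

void setup() {
  for (int i = 0; i < 8; ++i) pinMode(LED_PINS[i], OUTPUT);
}

void loop() {
  for (int col = 0; col < 5; ++col) {
    for (int row = 0; row < 8; ++row)
      digitalWrite(LED_PINS[row], bitRead(GLYPH[col], row) ? HIGH : LOW);
    delayMicroseconds(COLUMN_US);
  }
  // Blank gap between letters so the repeats are distinguishable.
  for (int row = 0; row < 8; ++row) digitalWrite(LED_PINS[row], LOW);
  delayMicroseconds(3 * COLUMN_US);
}
```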