Figuring out how to build webcam surveillance software - video-capture

I’m a novice, and I have a project idea that I need some help getting specific on. I want to build software that uses a webcam to capture images when it detects motion. Any idea what software would be best to do this with?
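One common way to approach the "capture images when it detects motion" part, whichever capture library you end up choosing, is simple frame differencing: compare each new frame to the previous one and save a snapshot when enough pixels have changed. Below is a minimal C# sketch of that idea only; the class name and thresholds are illustrative, and it assumes you already receive grayscale frames as byte arrays from whatever camera API you pick (the capture side is not shown).

```csharp
using System;

static class MotionDetector
{
    const int PixelThreshold = 25;       // per-pixel brightness change that counts as "different"
    const double MotionThreshold = 0.02; // fraction of changed pixels that counts as "motion"

    // Compares two grayscale frames of the same size and reports whether
    // enough pixels changed between them to be considered motion.
    public static bool DetectMotion(byte[] previousFrame, byte[] currentFrame)
    {
        if (previousFrame == null || previousFrame.Length != currentFrame.Length)
            return false;

        int changedPixels = 0;
        for (int i = 0; i < currentFrame.Length; i++)
        {
            if (Math.Abs(currentFrame[i] - previousFrame[i]) > PixelThreshold)
                changedPixels++;
        }
        return (double)changedPixels / currentFrame.Length > MotionThreshold;
    }
}
```

In a real application you would call this from the camera's frame callback and write the current frame to disk whenever it returns true; libraries such as AForge.NET, Emgu CV, or the Windows MediaCapture API can supply the frames.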

Related

Advice on accessing Logitech BRIO features

For my university project I must develop a Windows app which recognises a user based on two biometrics: fingerprint and facial heat signature. This is very new and exciting territory for me, as I will encounter difficult challenges that I have not yet faced, and the learning curve will be very steep but fruitful.
My question relates to the camera which I will attempt to use for facial heat signature recognition. This is it: http://support.logitech.com/en_us/product/brio
It is relatively new and Logitech have not released any dev SDK for it, so I am stuck on how to get under its hood/bonnet and integrate it with my app. I am looking for advice on how I can go about doing this, and on whether it is feasible at all. If it is not, then I cannot afford to waste my time on it and will have to come up with new ideas.
As an aside, it can be used for Windows Hello.
In short, I am looking for advice on how I can approach this challenge or whether I should at all. Thank you.
Try accessing it through the MediaCapture and MediaFrameSource classes; that works for me. Note that it is only a 340x340, 30 fps camera, and the IR diode blinks at about 15-20 Hz, so there are blinks in the IR frames. I used C#.
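For reference, here is a rough C# (UWP) sketch of that approach: enumerate the frame source groups, pick one that exposes an infrared stream, and read frames with a MediaFrameReader. This is only an outline of the classes mentioned above; the class name is made up, and error handling and pixel-format conversion are omitted.

```csharp
using System.Linq;
using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;

public class BrioIrReader
{
    public async Task StartAsync()
    {
        // Find a source group that exposes an infrared stream.
        var groups = await MediaFrameSourceGroup.FindAllAsync();
        var group = groups.FirstOrDefault(g =>
            g.SourceInfos.Any(i => i.SourceKind == MediaFrameSourceKind.Infrared));
        if (group == null) return;

        var mediaCapture = new MediaCapture();
        await mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            SourceGroup = group,
            MemoryPreference = MediaCaptureMemoryPreference.Cpu, // gives SoftwareBitmap frames
            StreamingCaptureMode = StreamingCaptureMode.Video
        });

        // Open a reader on the infrared source and handle frames as they arrive.
        var irInfo = group.SourceInfos.First(i => i.SourceKind == MediaFrameSourceKind.Infrared);
        var reader = await mediaCapture.CreateFrameReaderAsync(mediaCapture.FrameSources[irInfo.Id]);
        reader.FrameArrived += (sender, args) =>
        {
            using (var frame = sender.TryAcquireLatestFrame())
            {
                var bitmap = frame?.VideoMediaFrame?.SoftwareBitmap;
                // Process the IR bitmap here (expect the blinking noted above).
            }
        };
        await reader.StartAsync();
    }
}
```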

How to start development with kinect

I am new to development with Kinect, so I need some links on how to get the development environment set up for Kinect on Windows. Once the environment is set up, could you also point me to some links on how to get started with basic Kinect programming?
The best way to start learning Kinect development is from the samples that come along with the SDK. These samples show you how to do basic operations like enabling the color stream, enabling the skeleton stream, or removing the background.
If you are totally new to Kinect, this series will give you a jump start:
http://channel9.msdn.com/Series/Programming-Kinect-for-Windows-v2
For a more detailed understanding, work through these hands-on labs for Kinect:
http://kinect.github.io/tutorial/lab01/index.html
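To give a flavour of what "enabling the color stream" looks like in code, here is a small C# sketch against the Kinect for Windows SDK v2 (the version the Channel 9 series above covers). It is only a starting point, with the class name chosen for illustration: it opens the default sensor and copies each color frame into a BGRA byte array; what you then do with the pixels is up to you.

```csharp
using Microsoft.Kinect; // Kinect for Windows SDK v2

public class ColorStreamExample
{
    private KinectSensor sensor;
    private ColorFrameReader colorReader;

    public void Start()
    {
        // Open the default Kinect sensor and a reader on its color source.
        sensor = KinectSensor.GetDefault();
        sensor.Open();
        colorReader = sensor.ColorFrameSource.OpenReader();
        colorReader.FrameArrived += OnColorFrameArrived;
    }

    private void OnColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
    {
        using (ColorFrame frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null) return; // frames can be dropped

            FrameDescription desc = frame.FrameDescription;
            byte[] pixels = new byte[desc.Width * desc.Height * 4];

            // Convert the native format to BGRA for easy display or processing.
            frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra);

            // 'pixels' now holds one BGRA image.
        }
    }
}
```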

Record system audio with Visual Basic.Net

I have been stuck on an issue with recording system audio in VB.Net for quite some time now, and I can't find any proper way to do it. In the past I was able to record the Stereo Mix channel, but as we all know, the quality is absolutely horrible.
I have looked into the Bass.Net library, but I find it incredibly hard to understand, and the licensing agreement does not fit my usage.
Is there a way to record the system audio (audio played by the computer) properly, at optimal quality, and save the recorded audio as a .wav or .mp3?
NAudio can do that and is distributed under the Microsoft Public License (Ms-PL). I don't know whether that license fits your needs, but NAudio itself will do the job.
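For what it's worth, the usual NAudio route for this is WASAPI loopback capture, which records whatever the default output device is playing at full quality. A minimal sketch follows (in C#, but NAudio is a .NET library, so the same calls work from VB.Net); the output file name is just an example.

```csharp
using NAudio.Wave;

class LoopbackRecorder
{
    static void Main()
    {
        // Captures whatever the default playback device is currently rendering.
        var capture = new WasapiLoopbackCapture();
        var writer = new WaveFileWriter("recorded.wav", capture.WaveFormat);

        // Append each captured buffer to the .wav file.
        capture.DataAvailable += (s, e) => writer.Write(e.Buffer, 0, e.BytesRecorded);
        capture.RecordingStopped += (s, e) =>
        {
            writer.Dispose();
            capture.Dispose();
        };

        capture.StartRecording();
        System.Console.WriteLine("Recording... press Enter to stop.");
        System.Console.ReadLine();
        capture.StopRecording();
    }
}
```

The result is lossless PCM in a .wav; if you need an .mp3 you would re-encode that file afterwards with an encoder of your choice.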
This article will help you accomplish what you need with no third-party libraries required:
http://www.codeproject.com/Articles/770246/How-to-record-any-PC-sound-through-WASAPI-and-Audi

Opening Kinect datasets and/or SDK Samples

I am very new to Kinect programming and am tasked to understand several methods for 3D point cloud stitching using Kinect and OpenCV. While waiting for the Kinect sensor to be shipped over, I am trying to run the SDK samples on some data sets.
I am really clueless as to where to start now, so I downloaded some datasets here, and do not understand how I am supposed to view/parse these datasets. I tried running the Kinect SDK Samples (DepthBasic-D2D) in Visual Studio but the only thing that appears is a white screen with a screenshot button.
There seems to be very little documentation with regards to how all these things work, so I would appreciate if anyone can point me to the right resources on how to obtain and parse depth maps, or how to get the SDK Samples work.
The Point Cloud Library (PCL) is a good starting point for handling point cloud data obtained using the Kinect and the OpenNI driver.
OpenNI is, among other things, an open-source framework that provides an API to communicate with vision and audio sensor devices (such as the Kinect). Using OpenNI you can access the raw data acquired with your Kinect and use it as input to the PCL code that processes the data. In other words, OpenNI is an alternative to the official Kinect SDK, compatible with many more devices, and with great support and tutorials!
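As background for what those libraries do under the hood: each depth pixel is turned into a 3D point using the camera's intrinsics (the pinhole model). A small C# sketch of that conversion is below; the focal length and principal point values are placeholders rather than a real Kinect calibration, and with the official SDK you would normally let its coordinate-mapping API do this for you.

```csharp
using System.Collections.Generic;

public struct Point3D { public float X, Y, Z; }

public static class DepthToCloud
{
    // Converts a depth frame (millimetres per pixel) to 3D points with a
    // pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    // The intrinsics below are placeholder values, not a real calibration.
    public static List<Point3D> Convert(ushort[] depthMm, int width, int height,
                                        float fx = 570f, float fy = 570f,
                                        float cx = 320f, float cy = 240f)
    {
        var cloud = new List<Point3D>();
        for (int v = 0; v < height; v++)
        {
            for (int u = 0; u < width; u++)
            {
                ushort d = depthMm[v * width + u];
                if (d == 0) continue;      // 0 means "no depth reading"
                float z = d / 1000f;       // millimetres -> metres

                cloud.Add(new Point3D
                {
                    X = (u - cx) * z / fx,
                    Y = (v - cy) * z / fy,
                    Z = z
                });
            }
        }
        return cloud;
    }
}
```

Once you have per-frame clouds like this, "stitching" them is the registration problem (e.g. ICP) that PCL's tutorials cover.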
There are plenty of tutorials out there like this, this and these.
Also, this question is highly related.

Getting started with image processing on Mac OS X

I recently moved from a PC to a MacBook Pro. I'm starting to go through tutorials on Objective-C and developing in Cocoa. I do a lot of image processing algorithm development work (pixel-by-pixel manipulation) in my day job, so I'd like to create a test image processing app or two for OS X. I'm struggling to figure out where to start - let's say I want to create a simple application (that I could reuse) like the following:
Load an image from an Open File option within the File menu
Display this within the GUI
Click a button to apply pixel-by-pixel processing
Update the displayed image
Save the processed image from the Save option within the File menu
Any pointers or links would be most appreciated.
Thanks
Other info:
I'm pretty familiar with OpenCV on Linux - I haven't looked at using it within the Objective-C/Cocoa/Xcode environment yet, though, and I'm not even sure whether that would be a good idea.
I guess it would be nice to use GPU acceleration as well, but I'm not familiar with OpenGL/OpenCL, so I might have to put that off for the moment.
As you are looking at the Apple platform, you should look into the Core Image framework - it provides most of the pre-baked building blocks ready to be used in your application.
For more advanced purposes, you can start off with OpenCV.
Best of luck!!
As samfisher suggests, OpenCV is not that hard to get working on the Mac, and Core Image is a great Cocoa framework for doing GPU-accelerated image processing. I'm working on porting my GPUImage framework from iOS to the Mac, and it's entirely geared around making accelerated image processing easy to work with, but unfortunately that port isn't ready yet.
If you're just getting started on the Mac, one tool that I can point out which you might overlook is Quartz Composer. You have to download the separate Graphics Tools package from Apple's developer site to install Quartz Composer, because it's no longer shipped with Xcode.
Quartz Composer is a graphical development tool that lets you drag and drop modules, connect inputs and outputs, and do rapid development of some fairly interesting things. One task it's great for is rapid prototyping of image processing, either using Core Image or OpenGL shaders. I've even heard of people using OpenCV with it via custom patches. You can easily connect an image or camera source into a filter chain, then edit the filters and see live updates as you work on them, without requiring a compile-run cycle.
If you want some sample QC projects to play with, I have a couple of them linked from this article I wrote a couple of years ago. They both do the same color-based object tracking, with one using Core Image and the other OpenGL shaders. You can dig into that and play around to see how that works, without having to get too far into writing any code.