I would like to develop an interface to a wireless (WiFi) IP camera and stream its video into a frame in my application. I have never done anything like this and would appreciate any pointers.
How should I approach this task?
Have a look at my MJPEG Decoder on CodePlex. You'll find source + binaries for almost every Windows platform (WinForms, WPF, Windows Runtime, etc.) along with samples. Let me know if you have any questions.
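If you prefer to roll your own instead of using a library: most IP cameras expose their MJPEG feed over HTTP as a multipart/x-mixed-replace stream, so you can pull individual JPEG frames straight out of the byte stream. Below is a minimal C# sketch; the camera URL you pass in is a placeholder, and the marker-scanning approach assumes the camera sends plain baseline JPEGs.

    // Minimal MJPEG-over-HTTP reader (sketch). Check your camera's documentation
    // for its actual MJPEG endpoint URL; the one passed in here is a placeholder.
    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class MjpegReader
    {
        public static async Task ReadFramesAsync(string url, Action<byte[]> onFrame)
        {
            using (var http = new HttpClient())
            using (var stream = await http.GetStreamAsync(url))
            {
                var frame = new List<byte>();
                int prev = -1, current;
                bool inFrame = false;

                while ((current = stream.ReadByte()) != -1)
                {
                    if (!inFrame && prev == 0xFF && current == 0xD8)   // JPEG start-of-image marker
                    {
                        inFrame = true;
                        frame.Clear();
                        frame.Add(0xFF);
                    }
                    if (inFrame)
                    {
                        frame.Add((byte)current);
                        if (prev == 0xFF && current == 0xD9)           // JPEG end-of-image marker
                        {
                            onFrame(frame.ToArray());                  // one complete JPEG frame
                            inFrame = false;
                        }
                    }
                    prev = current;
                }
            }
        }
    }

Each byte[] handed to onFrame is a complete JPEG, which you can decode with BitmapImage (WPF) or Image.FromStream (WinForms) and draw into the frame in your UI.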
There are a couple of options you may consider (a minimal preview sketch follows the list below). However, it is very important to first define the source of the video stream, i.e. the transmitting device:
How to use a web cam in C# with .NET Framework 4.0 and Microsoft Expression Encoder 4
Perform live video stream processing from CaptureElement & MediaCapture
Video Panel control XAML description for Viewing Webcam in WPF
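For reference, the CaptureElement/MediaCapture route mentioned above comes down to just a few calls. A minimal sketch for a Windows Runtime (XAML) app is below; PreviewElement is assumed to be a CaptureElement you declared in your page's XAML, and the method is assumed to live in that page's class.

    // Sketch: start a live camera preview in a Windows Runtime (XAML) app.
    // These are members of your Page class; "PreviewElement" is a CaptureElement in its XAML.
    using System.Threading.Tasks;
    using Windows.Media.Capture;

    private MediaCapture _mediaCapture;

    private async Task StartPreviewAsync()
    {
        _mediaCapture = new MediaCapture();
        await _mediaCapture.InitializeAsync();     // opens the default capture device
        PreviewElement.Source = _mediaCapture;     // binds the capture pipeline to the XAML element
        await _mediaCapture.StartPreviewAsync();   // starts streaming frames into the element
    }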
I need to get raw frames from the camera in YUV/YCbCr format on Windows Phone 8.1 (without Silverlight). I don't see any example on the internet. Is it possible using MediaCapture or CameraPreviewImageSource (Nokia SDK)?
Thanks
The recommended way to process raw video frames on Windows Phone 8.1 is to write a custom MFT plug-in and then add it to the MediaCapture object via AddEffectAsync. The MFT acts as a DSP filter between the decoder and the XAML rich compositor.
You can choose the color space that you want to support in your MFT and Media Foundation will automatically insert color space converters for you. Keep in mind that the color spaces available on the Phone are limited. That said, NV12 is the standard color space for most video devices and is considered a 4:2:0 YUV color space.
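For orientation, NV12 lays out a full-resolution Y (luma) plane followed by a half-resolution, interleaved UV (chroma) plane. Here is a quick sketch of indexing into such a buffer, assuming a tightly packed frame with no stride padding (real MFT buffers usually have a stride larger than the width, so adjust the row offsets accordingly):

    // Sketch: read one pixel's Y, U and V from a tightly packed NV12 buffer.
    // Assumes stride == width; adjust the row offsets if your buffer is padded.
    static void SampleNv12(byte[] frame, int width, int height, int x, int y,
                           out byte luma, out byte u, out byte v)
    {
        int ySize = width * height;            // size of the full-resolution Y plane
        luma = frame[y * width + x];

        int uvRow = y / 2;                     // chroma is subsampled 2x2 (4:2:0)
        int uvCol = (x / 2) * 2;               // U and V bytes are interleaved: U0 V0 U1 V1 ...
        u = frame[ySize + uvRow * width + uvCol];
        v = frame[ySize + uvRow * width + uvCol + 1];
    }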
While this sounds simple in theory, it can be quite complex in practice. MFTs must be written in C++ / MoCOM, and writing one requires rather deep knowledge of C++ and COM. I don't want to scare you away from giving it a try, but it does have a learning curve.
Here is a sample for Windows Store that shows you how to create an MFT plug-in and add it to the MediaCapture object. Unfortunately, for whatever reason, this sample did not get converted to a Universal app; however, it should be easy to do the conversion. Since this is such a seminal sample, I will request that we publish it as a Universal app.
Media capture using capture device sample
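Once the MFT is built and registered, wiring it into the capture pipeline from C# is only a few lines. A sketch is below; the activatable class ID string is a placeholder for whatever your MFT actually registers.

    // Sketch: attach a custom MFT effect to the preview stream.
    // "MyEffects.YuvFrameProcessor" is a placeholder activatable class ID.
    using Windows.Media.Capture;

    var mediaCapture = new MediaCapture();
    await mediaCapture.InitializeAsync();
    await mediaCapture.AddEffectAsync(
        MediaStreamType.VideoPreview,      // which stream the MFT is inserted into
        "MyEffects.YuvFrameProcessor",     // activatable class ID of the registered MFT
        null);                             // optional IPropertySet with effect settings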
I hope this helps,
James
I am trying to use two features of the Kinect:
Captured video - I followed this guide:
http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/4ee6e7ca-123d-4838-82b6-e5816bf6529c
and succeeded in using the Kinect as a webcam, then used DirectShow to capture the video. Works just fine.
Skeleton - I use the Kinect SDK 1.7 and the skeleton feature works sweet!
The problem: those two features don't work simultaneously.
Each of them works great by itself, but they just don't work together.
I have also tried checking the captured video in Skype's video settings section, while running the Skeleton Basics in the "Kinect for Windows Developer Toolkit 1.7"
Do you know why that happens and how I can fix the problem, so that I can enjoy the two features simultaneously?
Thanks a lot,
Guy.
This shouldn't happen. I'm also working on a virtual dressing room concept and I can access the Kinect joints and the video stream at the same time. I'm using XNA, and I get the video buffer by enabling the SDK's color stream:

    kinectSensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    kinectSensor.SkeletonStream.Enable(new TransformSmoothParameters());

I don't know what your approach is.
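For completeness, here is a minimal sketch (Kinect for Windows SDK 1.x, C#) that enables both streams on the same sensor and receives them together through AllFramesReady; the handler body is left as a stub.

    // Sketch: color + skeleton from one sensor using the Kinect for Windows SDK 1.x.
    using System.Linq;
    using Microsoft.Kinect;

    KinectSensor sensor = KinectSensor.KinectSensors
        .FirstOrDefault(s => s.Status == KinectStatus.Connected);

    sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    sensor.SkeletonStream.Enable(new TransformSmoothParameters());

    sensor.AllFramesReady += (s, e) =>
    {
        using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
        using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
        {
            // Both frames arrive together here; copy the pixel data and the
            // skeleton array out before the frames are disposed.
        }
    };

    sensor.Start();

If the color feed comes through a separate webcam driver (e.g. a DirectShow filter) while the skeleton comes through the SDK, the two can end up competing for the device, so getting both streams from the SDK as above is the simplest way to run them simultaneously.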
I was looking over the documentation, trying to find anything that would let me talk to the Kinect device directly.
I'm trying to get accelerometer data, but I'm not sure how. So far there are two things I've spotted in the guide and docs: XnModuleDeviceInterface/xn::ModuleDevice and XnModuleLockAwareInterface/xn::ModuleLockAwareInterface.
I'm wondering if I can use the ModuleDevice Get/Set methods to talk to the device and ask for accelerometer data. If so, how can I get started?
Also, I was thinking it might be possible to 'lock' OpenNI functionality temporarily while I read accelerometer data via freenect or something similar, then 'unlock' it once the read is done.
Has anyone tried this before? Any tips?
I'm currently using the SimpleOpenNI wrapper and Processing, but have used OpenFrameworks and the C++ library, so the language wouldn't be very important.
The standard OpenNI Kinect drivers don't expose or allow access to any accelerometer, motor, or LED controls. All of these controls are done through the "NUI Motor" USB device (protocol reference), which the SensorKinect Kinect driver doesn't communicate with.
One way around this is to use a modified OpenNI SensorKinect driver, e.g. this one, which does connect to the NUI Motor device and exposes basic accelerometer and motor control via a "CameraAngleVertical" integer property. It appears that you should be able to read/write an arbitrary integer property using SimpleOpenNI and Processing.
If you're willing to use a non-OpenNI-based solution, you can use Daniel Shiffman's Kinect Processing library, which is based on libfreenect. You'll get full accelerometer, motor, and LED support, but will lose access to the OpenNI skeleton/gesture support. A similar library for OpenFrameworks is ofxKinect.
Regarding locking of OpenNI nodes, my understanding is that this just prevents properties from updating and does nothing at the USB driver level. Switching between drivers--PrimeSense-based SensorKinect and libusb-based libfreenect--at runtime is not possible. It may be possible (I haven't tried it) to configure OpenNI for the camera device, and to use freenect to communicate with the NUI Motor device. No locking/synchronization between these devices should be required.
Is it possible to capture all/any audio played by a PC into a system.io.stream, so that it can then be run through speech recognition (System.Speech.Recognition.SpeechRecognitionEngine)?
Essentially I'm looking to perform speech recognition on any audio playing on the client PC. Google seems to suggest that capturing a stream like this can be done using Microsoft.DirectX.DirectSound, but I honestly cannot work out how. Any suggestions would be greatly appreciated.
Take a look at this question for a solution on Vista/Win7, and take a look at this one for WinXP.
Summary:
You can use loopback recording with WASAPI on Vista/Win7, but there is no equivalent API on WinXP; however, a partial solution can be achieved with a virtual soundcard driver.
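As a concrete illustration of the WASAPI loopback route, here is a sketch using the NAudio library (NAudio is my suggestion here, not something required by the linked answers). It captures whatever the PC is currently playing. Note that the loopback format is typically 32-bit IEEE float stereo, so you would still need to convert/resample it to 16-bit PCM before handing it to SpeechRecognitionEngine.SetInputToAudioStream.

    // Sketch: capture "what you hear" on Vista/Win7+ via WASAPI loopback with NAudio.
    using System;
    using NAudio.Wave;

    class LoopbackCaptureDemo
    {
        static void Main()
        {
            var capture = new WasapiLoopbackCapture();
            var writer = new WaveFileWriter("loopback.wav", capture.WaveFormat);

            capture.DataAvailable += (s, e) =>
                writer.Write(e.Buffer, 0, e.BytesRecorded);   // raw loopback samples

            capture.RecordingStopped += (s, e) =>
            {
                writer.Dispose();
                capture.Dispose();
            };

            capture.StartRecording();
            Console.WriteLine("Recording system audio; press Enter to stop...");
            Console.ReadLine();
            capture.StopRecording();
        }
    }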
I need to build a solution that will read from a USB camera and save the Video and Image files in Dicom Format.
I'm wondering what free tools could I use to accomplish this.
Without more details, such as the target operating system or programming language, all I can do is give you some general links.
For dealing with the DICOM format:
dcm4che, a DICOM implementation in Java
DICOM# (partially rewrites the dcm4che open source project in C#)
C++ open source DICOM library
For capturing images from a camera in Windows:
Windows Mobile 5 or older devices
Webcam using DirectShow.NET (CodeProject)
SO answer about using WIA (Windows Image Acquisition)
Also, if you want to interoperate with other DICOM devices, you might want to look at the visible light video DICOM supplement: ftp://medical.nema.org/medical/dicom/final/sup47_ft.pdf. This will tell you which groups/elements the devices consuming your objects might expect.
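To make the "save a frame as DICOM" step concrete, here is a rough sketch using the fo-dicom C# library (fo-dicom is my own pick for illustration; the libraries listed above offer equivalent functionality). It wraps an interleaved 8-bit RGB buffer from your camera in a Secondary Capture image; the patient attributes and the pixel buffer are placeholders.

    // Sketch: wrap one captured RGB frame in a DICOM Secondary Capture object (fo-dicom).
    using Dicom;
    using Dicom.Imaging;
    using Dicom.IO.Buffer;

    static void SaveFrameAsDicom(byte[] rgbPixels, int width, int height, string path)
    {
        var dataset = new DicomDataset();
        dataset.Add(DicomTag.SOPClassUID, DicomUID.SecondaryCaptureImageStorage);
        dataset.Add(DicomTag.SOPInstanceUID, DicomUID.Generate());
        dataset.Add(DicomTag.PatientName, "Doe^John");               // placeholder demographics
        dataset.Add(DicomTag.PatientID, "12345");

        dataset.Add(DicomTag.PhotometricInterpretation, "RGB");      // interleaved 8-bit RGB
        dataset.Add(DicomTag.SamplesPerPixel, (ushort)3);
        dataset.Add(DicomTag.PlanarConfiguration, (ushort)0);
        dataset.Add(DicomTag.Rows, (ushort)height);
        dataset.Add(DicomTag.Columns, (ushort)width);
        dataset.Add(DicomTag.BitsAllocated, (ushort)8);
        dataset.Add(DicomTag.BitsStored, (ushort)8);
        dataset.Add(DicomTag.HighBit, (ushort)7);
        dataset.Add(DicomTag.PixelRepresentation, (ushort)0);

        var pixelData = DicomPixelData.Create(dataset, true);        // creates the PixelData element
        pixelData.AddFrame(new MemoryByteBuffer(rgbPixels));

        new DicomFile(dataset).Save(path);
    }

For video rather than single frames, you would use one of the multi-frame or visible-light video SOP classes from the supplement linked above instead of Secondary Capture.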