I realize there is no public documentation on using the iSight light sensor; however, programs such as ShadowBook (shown here) are able to access the brightness data, and I was simply wondering whether anyone has been able to achieve a similar result and knows how to access this sensor? Thanks!
You can access the light sensor with IOService, from the IOKit library. The name for the light sensor is "AppleLMUController". Here's a good example: light sensor.
Simply put, get the service like this:
io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleLMUController"));
Then, connect to the service using:
io_connect_t port = 0;
IOServiceOpen(service, mach_task_self(), 0, &port);
Get the light levels using:
IOConnectMethodScalarIScalarO(port, 0, 0, 2, &left, &right);
Where left and right are integers that now hold the light levels of the sensors.
Note that many IOKit functions return a kern_return_t value, which will hold KERN_SUCCESS unless the call failed. Also be sure to release the service object using IOObjectRelease(service);
EDIT: On second thought, IOConnectMethodScalarIScalarO() appears to be deprecated.
Instead, use:
uint32_t outputs = 2;
uint64_t values[2];
IOConnectCallMethod(port, 0, NULL, 0, NULL, 0, values, &outputs, NULL, NULL);
The left and right values will be stored in values[0] and values[1] respectively. Be aware that not all MacBooks work this way: on my mid-2010 15-inch MacBook Pro, both values are the same, as the light sensor is in the iSight camera.
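For reference, here is a minimal end-to-end sketch of the approach described above, using the non-deprecated IOConnectCallMethod. Selector 0 and the two-value output are assumptions taken from this answer, not documented API, and the service may simply be absent on newer machines; link against the IOKit framework.

#include <IOKit/IOKitLib.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault,
                                                       IOServiceMatching("AppleLMUController"));
    if (!service)
        return 1;                                     // no light sensor service found

    io_connect_t port = 0;
    if (IOServiceOpen(service, mach_task_self(), 0, &port) != KERN_SUCCESS) {
        IOObjectRelease(service);
        return 1;
    }

    uint32_t outputs = 2;
    uint64_t values[2] = {0, 0};
    kern_return_t kr = IOConnectCallMethod(port, 0,          // selector 0: read sensor values
                                           NULL, 0,          // no scalar input
                                           NULL, 0,          // no struct input
                                           values, &outputs, // scalar output: left/right levels
                                           NULL, NULL);      // no struct output
    if (kr == KERN_SUCCESS)
        printf("left: %llu  right: %llu\n", values[0], values[1]);

    IOServiceClose(port);
    IOObjectRelease(service);
    return 0;
}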
I have a project where the source device has an SVideo and a Composite connector available for capture. In DirectShow, I can use IAMCrossbar to set which one to capture from, but in MediaFoundation I only get a single video stream and a C00D3704 status when I try to start streaming (using a SourceReader). Is there any way to select the input in MediaFoundation?
NB: LEADTOOLS claims to be able to do this, but I don't know how. Nothing else I've found says how to do it.
Pointers to the correct interface and/or attributes would be enough...
The answer depends on the specific capture card, but it is nevertheless pretty simple. Some capture cards (like a dual-head Datapath card) will appear as two separate devices, one for each input on the card. In that case you activate them separately, following the enumeration (error checking omitted for brevity):
UINT32 deviceCount = 0;
IMFActivate** devices = nullptr;
Microsoft::WRL::ComPtr<IMFAttributes> attributes = nullptr;
hr = ::MFCreateAttributes(attributes.GetAddressOf(), 1);
hr = attributes->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
    MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);
hr = ::MFEnumDeviceSources(attributes.Get(), &devices, &deviceCount);
And then activation of the device using GetMediaFoundationActivator and the member function ActivateObject.
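If you are not using that helper, here is a hedged sketch of doing the same thing directly through IMFActivate::ActivateObject; the index 0 and the variable names are illustrative only.

Microsoft::WRL::ComPtr<IMFMediaSource> source;
if (deviceCount > 0)
{
    // Activate the first enumerated capture device as an IMFMediaSource
    hr = devices[0]->ActivateObject(IID_PPV_ARGS(source.GetAddressOf()));
}

// Release the activation objects and the enumeration array when done
for (UINT32 i = 0; i < deviceCount; ++i)
    devices[i]->Release();
CoTaskMemFree(devices);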
This makes sense for a card like the one referenced above since it has separate hardware on the card for each input. And you can concurrently activate each as a result.
However, it is possible for the driver to report your S-Video and Composite inputs as one device, since they will likely share the same hardware. In that case, you will find the separate stream types on a single IMFSourceReader.
DWORD index = 0;
IMFMediaType* mediaType = nullptr;
HRESULT hr = S_OK;
while (hr == S_OK)
{
    hr = reader->GetNativeMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, index, &mediaType);
    if (hr != S_OK)
        break;                      // MF_E_NO_MORE_TYPES means the list is exhausted
    // ... [ process media type ]
    mediaType->Release();
    mediaType = nullptr;
    ++index;
}
In this case, you set the stream selection (IMFSourceReader::SetStreamSelection). I go into some detail on that topic here.
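A minimal sketch of that call, assuming reader is the same IMFSourceReader as above; which streams you enable is of course up to you.

hr = reader->SetStreamSelection((DWORD)MF_SOURCE_READER_ALL_STREAMS, FALSE);        // deselect everything
hr = reader->SetStreamSelection((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, TRUE);  // enable the stream you want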
If you are intending to concurrently capture audio, you will have to build an aggregate source, which I wrote a bit about here.
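In case it helps, a hedged sketch of building such an aggregate source with MFCreateAggregateSource (which, if I recall correctly, requires Windows 8 or later). videoSource and audioSource are hypothetical, already-activated IMFMediaSource instances.

Microsoft::WRL::ComPtr<IMFCollection> sources;
hr = ::MFCreateCollection(sources.GetAddressOf());
hr = sources->AddElement(videoSource.Get());   // hypothetical video IMFMediaSource
hr = sources->AddElement(audioSource.Get());   // hypothetical audio IMFMediaSource

Microsoft::WRL::ComPtr<IMFMediaSource> aggregate;
hr = ::MFCreateAggregateSource(sources.Get(), aggregate.GetAddressOf());
// The aggregate source can then be handed to MFCreateSourceReaderFromMediaSource.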
Assuming that your capture card has fairly recent drivers, I am certain that you will locate and read from your available streams without much trouble. Good luck.
I'm writing a Kinect skeleton-tracking program. Right now the definition of each gesture is hard-coded in the program, but I want gestures to be definable by the user. One way of doing this is with a DFA. I don't know how to start with C#. Can anyone help?
Try using Lists to store the coordinates of the skeleton's joints (as a kind of buffer) and then run a DFA over them. You could define your transitions as a range of coordinates for every direction, and the final state would be reached when the elements in the buffer are approximately in the same area.
So in C# you will need to create a datatype to save the sequence of the gesture, which gets updated when the user adds one, plus Lists for the buffers, as stated above.
When saving the gesture, your code will look something like this:
while (!jointStable && i < buffer.Count - 1)
{
    // Compare consecutive buffered positions of the tracked joint
    // (think about adding a tolerance here)
    if (buffer[i + 1].X - buffer[i].X > 0 && buffer[i + 1].Y - buffer[i].Y > 0)
    {
        gesture.Add("Upper_Right");
    }
    // ... handle the other directions similarly
    i++;
}
Just a piece of advice: the Kinect sensor is not that accurate, so try to establish some kind of tolerance.
I hope my answer helps you, or at least gives you some inspiration :)
I'm fairly new to wxWidgets so please bear with me. Let's say I have a 10Kx10K image and my wxScrolledWindow has a size of 640x480. I load the whole image into a wxBitmap which I use in my paint function.
Now in my OnPaint function I just say
wxPaintDC dc(this);
dc.DrawBitmap(_Bitmap, 0, 0 );
This somewhat works for the first few paints, but soon the window content is out of order and artifacts appear. This happens very quickly when I move a scroll bar back and forth.
I use the latest wxWidgets on a Windows 7 machine.
So, how can I improve my painting code?
Many thanks,
Christian
Using a 10000x10000 wxBitmap is a bad idea on its own: it may simply fail to be created on an older system (that's roughly 400 MB of video RAM!). Drawing it entirely is sheer madness.
I don't know where your data comes from, but in a typical case of e.g. a map to be shown on screen, you should break it into tiles, convert the tiles that are currently visible on screen into wxBitmaps (one or several) and draw only those.
Then you may optimize your drawing by using double buffering (which is relatively useless under Windows 7, which already double-buffers everything on its own) and so on, but you should be using a reasonably sized backing-store bitmap.
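To illustrate the tiling idea, here is a minimal sketch, assuming the full image is kept as a wxImage member called m_image and tiles are converted on the fly; the names, the tile size and the lack of caching are all simplifications.

void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    const int TILE = 256;                 // illustrative tile size
    wxPaintDC dc(this);
    DoPrepareDC(dc);                      // apply the scroll offsets to the DC

    int xs, ys, xu, yu;
    GetViewStart(&xs, &ys);
    GetScrollPixelsPerUnit(&xu, &yu);
    wxSize client = GetClientSize();
    wxRect visible(xs * xu, ys * yu, client.GetWidth(), client.GetHeight());

    wxRect imageRect(0, 0, m_image.GetWidth(), m_image.GetHeight());
    for (int ty = visible.GetTop() / TILE; ty * TILE <= visible.GetBottom(); ++ty)
    {
        for (int tx = visible.GetLeft() / TILE; tx * TILE <= visible.GetRight(); ++tx)
        {
            wxRect tileRect(tx * TILE, ty * TILE, TILE, TILE);
            tileRect.Intersect(imageRect);
            if (tileRect.IsEmpty())
                continue;
            // Converting on every paint is wasteful; cache the wxBitmap tiles in practice.
            wxBitmap tile(m_image.GetSubImage(tileRect));
            dc.DrawBitmap(tile, tileRect.GetX(), tileRect.GetY(), false);
        }
    }
}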
This sounds like something that might be helped by using double buffering.
The first thing to start trying is to replace wxPaintDC with wxBufferedPaintDC
For more suggestions, here is a wiki article on the subject: http://wiki.wxwidgets.org/Flicker-Free_Drawing
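For reference, a minimal sketch of that suggestion; the only change in the paint handler is the DC type (remember to include wx/dcbuffer.h).

#include <wx/dcbuffer.h>

void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    wxBufferedPaintDC dc(this);   // draws into an off-screen buffer, blitted once at the end
    dc.DrawBitmap(_Bitmap, 0, 0);
}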
As Ravenspoint kindly pointed out, there is an article on the wxWidgets wiki. According to that article, two things need to happen. First, override EVT_ERASE_BACKGROUND with an empty handler.
void Canvas::EraseBackground( wxEraseEvent& WXUNUSED(event))
{
}
And second, implement a basic double-buffering scheme. Here is how I did it.
void Canvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    int x, y;
    GetViewStart(&x, &y);
    wxRect Client_Area = GetClientRect();
    int width = Client_Area.width;
    int height = Client_Area.height;
    wxBitmap Current = _Bitmap.GetSubBitmap(wxRect(x * 10, y * 10, width, height));
    wxPaintDC dc(this);
    dc.DrawBitmap(Current, 0, 0, false);
}
My scroll rate for both x and y is set to 10 pixels per scroll unit; that's why I multiply the view-start coordinates by 10.
Any more insight is very welcome.
Thanks,
Christian
For example, there are QR scanners which scan a video stream in real time and extract the QR codes' info.
I would like to detect from the video whether a light source is on or off; the source is quite powerful, so that should not be a problem.
I will probably take a video stream as input, maybe turn it into images, and analyze the images (or the stream in real time) for the presence of the light source (maybe by counting the pixels of a certain colour in the image?).
How do I approach this problem? Maybe there is some library or sample source?
It sounds like you are asking for information about several discrete steps. There are a multitude of ways to do each of them, and if you get stuck on any individual step it would be a good idea to post a question about it individually.
1: Get video frame
Like chaitanya.varanasi said, the AVFoundation framework is the best way of getting access to a video frame on iOS. If you want something less flexible and quicker, try looking at OpenCV's video capture. The goal of this step is to get access to a pixel buffer from the camera. If you have trouble with this, ask about it specifically.
2: Put pixel buffer into OpenCV
This part is really easy. If you get it from OpenCV's video capture you are already done. If you get it from AVFoundation you will need to put it into OpenCV like this:
// Buffer is of type CVImageBufferRef, which is what AVFoundation should be giving you
// I assume it is BGRA or RGBA formatted; if it isn't, change CV_8UC4 to the appropriate format
CVPixelBufferLockBaseAddress(Buffer, 0);
int bufferWidth = (int)CVPixelBufferGetWidth(Buffer);
int bufferHeight = (int)CVPixelBufferGetHeight(Buffer);
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(Buffer);
cv::Mat image = cv::Mat(bufferHeight, bufferWidth, CV_8UC4, pixel,
                        CVPixelBufferGetBytesPerRow(Buffer)); // wrap the buffer in OpenCV, no memory copied; pass the row stride in case rows are padded
// Process image here
// End processing
CVPixelBufferUnlockBaseAddress(Buffer, 0);
Note that I am assuming you plan to do this in OpenCV, since you used its tag. Also, I assume you can get the OpenCV framework to link to your project. If that is an issue, ask a specific question about it.
3: Process Image
This part is by far the most open-ended. All you have said about your problem is that you are trying to detect a strong light source. One very quick and easy way of doing that would be to compute the mean pixel value of a greyscale image. If you get the image in colour, you can convert it with cvtColor. Then just call cv::mean on it to get the mean value. Hopefully you can tell whether the light is on by how that value fluctuates.
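A minimal sketch of that check, assuming image is the BGRA cv::Mat from step 2 and the threshold is an arbitrary placeholder you would tune against your own footage:

cv::Mat grey;
cv::cvtColor(image, grey, cv::COLOR_BGRA2GRAY);   // drop colour, keep intensity
double meanBrightness = cv::mean(grey)[0];        // average intensity in the 0..255 range
bool lightIsOn = meanBrightness > 128.0;          // placeholder threshold; tune it for your scene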
chaitanya.varanasi suggested another option, you should check it out too.
OpenCV is a very large library that can do a wide variety of things. Without knowing more about your problem, I don't know what else to tell you.
Look at the AVFoundation Framework from Apple.
Hope it helps!
You can try this method: start by sending all frames to an AVCaptureVideoDataOutput. In the delegate method captureOutput:didOutputSampleBuffer:fromConnection: you can sample/calculate every pixel. Source: answer
Also, you can take a look at this SO question where they check whether a pixel is black. If it's such a powerful light source, you can take the inverse of the pixel and then apply a set threshold for black.
The above sample code only provides access to the pixel values stored in the buffer; you cannot run any commands other than those that change those values on a pixel-by-pixel basis:
for (uint32_t y = 0; y < height; y++)
{
    for (uint32_t x = 0; x < width; x++)
    {
        bgraImage.at<cv::Vec<uint8_t,4> >(y, x)[1] = 0;
    }
}
This—to use your example—will not work with the code you provided:
cv::Mat bgraImage = cv::Mat( (int)height, (int)extendedWidth, CV_8UC4, base );
cv::Mat grey = bgraImage.clone();
cv::cvtColor(grey, grey, 44);
Please advise with Objective-C code snippets and useful links on how I can control all audio output signals in OS X.
I think it should be something like a proxy layer somewhere in the OS X audio stack.
Thank you!
It's somewhat sad that there is no simple API to do this. Luckily it isn't too hard, just verbose.
First, get the system output device:
UInt32 size = sizeof(AudioDeviceID);
AudioDeviceID outputDevice;
OSStatus result = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice, &size, &outputDevice);
Then set the volume:
Float32 theVolume = 0.5f; // desired volume, from 0.0 to 1.0
result = AudioDeviceSetProperty(outputDevice, NULL, 0, /* master channel */ false, kAudioDevicePropertyVolumeScalar, sizeof(Float32), &theVolume);
Obviously I've omitted any error checking, which is a must.
Things can get a bit tricky because not all devices support channel 0 (the master channel). If this is the case with your device (it probably is) you have two options: query the device for its preferred stereo pair and set the volume on those channels individually, or just set the volume on channels 1 and 2.
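A hedged sketch of that fallback, using the same legacy CoreAudio calls as above (error checking again omitted): kAudioDevicePropertyPreferredChannelsForStereo returns the two channel numbers to target, and we default to channels 1 and 2 if the query fails.

UInt32 channels[2] = { 1, 2 };                  // fall back to channels 1 and 2
UInt32 pairSize = sizeof(channels);
AudioDeviceGetProperty(outputDevice, 0, false,
                       kAudioDevicePropertyPreferredChannelsForStereo,
                       &pairSize, channels);

Float32 theVolume = 0.5f;                       // example volume
for (int i = 0; i < 2; i++) {
    AudioDeviceSetProperty(outputDevice, NULL, channels[i], false,
                           kAudioDevicePropertyVolumeScalar,
                           sizeof(Float32), &theVolume);
}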