How to programmatically start the front camera of the iPad? - objective-c

I would like to start the front camera of the iPad when my app starts.
How do I do it programmatically?
Please let me know.

The first thing you need to do is detect whether the device has a front-facing camera. You could iterate through the available video capture devices, but UIImagePickerController provides a simpler check. Try this method:
+ (BOOL)isCameraDeviceAvailable:(UIImagePickerControllerCameraDevice)cameraDevice
This is a class method and UIImagePickerControllerCameraDevice can take two values:
- UIImagePickerControllerCameraDeviceRear
- UIImagePickerControllerCameraDeviceFront
Example code:
if ([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront]) {
    // do something
}
Note that this is available for iOS 4.0 and later.
Also, I am not sure whether there is any API to force the front-facing camera when the camera UI first opens. The camera always seems to start in the same mode the user left it in the last time it was used. Maybe Apple deliberately did not expose an API to change this and wanted to leave that choice to the user.
Nevertheless, you can at least detect the availability of the front camera and provide your feature accordingly.

If I understand your question correctly, all you have to do is open the camera in front mode instead of rear mode, so write this inside the method where you present the picker for the first time:
picker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
Hope this answers your question.
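Putting the two answers together, here is a minimal sketch of presenting the picker with the front camera pre-selected when the app starts. The method name and the place it is called from are assumptions for illustration, not part of the original answers:
#import <UIKit/UIKit.h>

// Call this from the view controller shown at launch, e.g. in viewDidAppear:.
// The controller is assumed to adopt UIImagePickerControllerDelegate and
// UINavigationControllerDelegate.
- (void)presentFrontCameraIfAvailable
{
    if (![UIImagePickerController isCameraDeviceAvailable:
              UIImagePickerControllerCameraDeviceFront]) {
        return; // no front-facing camera on this device
    }

    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType   = UIImagePickerControllerSourceTypeCamera;
    picker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
    picker.delegate     = self;

    [self presentViewController:picker animated:YES completion:nil];
}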

Related

Pull Single Frame from Video Feed (DJI Mobile SDK)

I am making a DJI Mobile SDK app and have set up an application that gets live video from the drone and displays it in a view, but I need to pull a single frame from the video feed to work with and cannot figure out how to do it!
One method would be to take a picture with the drone and then download it from the SD card, but I do not need the full-resolution image, and it feels like there must be a simpler way to just grab a single frame from the video preview.
The code that pushes the video stream to the previewer is:
- (void)videoFeed:(DJIVideoFeed *)videoFeed didUpdateVideoData:(NSData *)videoData {
    [[DJIVideoPreviewer instance] push:(uint8_t *)videoData.bytes length:(int)videoData.length];
}
Any ideas on how to pull an individual frame from the feed? Or is there a way to have the iOS app just take a screenshot of the preview and work with that?
Thanks!
I'm not very familiar with iOS. For Android there is a sample that uses the DJI Mobile SDK to grab still images and use them for panorama stitching: https://github.com/DJI-Mobile-SDK-Tutorials/Android-PanoramaDemo
The equivalent iOS version of the panorama stitching demo is here: https://github.com/DJI-Mobile-SDK-Tutorials/iOS-PanoramaDemo
Maybe you can get an idea of how to grab a still image from there.
There are several threads about this for Android, and I don't think iOS would be much different:
how to get bitmap data from drone camera stream. android application
Getting the bitmap from the fpvWidget is by far the simplest and fastest solution.
public Bitmap getFrameBitmap() {
    return fpvWidget.getBitmap();
}
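On iOS, a comparable quick approach is to snapshot the UIView that DJIVideoPreviewer renders into. This is only a sketch: previewView is assumed to be whatever view you configured as the preview target, and if the previewer draws with OpenGL the snapshot may come back black, in which case you would need to hook the decoded frame data instead.
#import <UIKit/UIKit.h>

// Renders the current contents of the preview view into a UIImage.
// Note: this captures the view at screen resolution, not the raw decoded frame.
- (UIImage *)currentFrameFromPreviewView:(UIView *)previewView
{
    UIGraphicsImageRenderer *renderer =
        [[UIGraphicsImageRenderer alloc] initWithSize:previewView.bounds.size];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext *context) {
        [previewView drawViewHierarchyInRect:previewView.bounds
                          afterScreenUpdates:YES];
    }];
}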

How to capture App Screen as Video with Audio in Mac OSX?

I am writing a macOS (OS X) application where I need to record only the view of my application (not the whole display), along with the audio it emits.
Think of it as a game app where I need to record the complete gameplay view of the application. How should I go about doing this?
I am aware of AVCaptureScreenInput and the example code, but how do I capture only the view of my application?
From the website you posted:
Note: By default, AVCaptureScreenInput captures the entire screen. You may set its cropRect property to limit the capture rectangle to a subsection of the screen.
Just set this property to the window's/view's rect and you're done.
Of course, you need to update the crop rect and restart the recording when the window's/view's rect changes.
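A minimal sketch of that idea, assuming window is the NSWindow whose contents you want to record and that it sits on the main display (cropRect is in the display's coordinate space, so multi-display setups need extra conversion):
#import <AVFoundation/AVFoundation.h>
#import <AppKit/AppKit.h>

AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureScreenInput *screenInput =
    [[AVCaptureScreenInput alloc] initWithDisplayID:CGMainDisplayID()];

// Limit the capture to the application window's frame instead of the full screen.
// NSWindow frames and cropRect both use a bottom-left origin on the main display.
screenInput.cropRect = NSRectToCGRect(window.frame);

if ([session canAddInput:screenInput]) {
    [session addInput:screenInput];
}
// ...then add an AVCaptureMovieFileOutput and start the session as in Apple's sample.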
Read the documentation carefully; there is a comment about multiple displays:
// If you're on a multi-display system and you want to capture a secondary display,
// you can call CGGetActiveDisplayList() to get the list of all active displays.
// For this example, we just specify the main display.
// To capture both a main and secondary display at the same time, use two active
// capture sessions, one for each display. On Mac OS X, AVCaptureMovieFileOutput
// only supports writing to a single video track.

iOS: compare a slice of an image to library of options

I'm basically trying to work out how to take a slice of an image, say a screenshot of an iPhone home screen, slice out the first icon, and compare it against a set array of images in a library. Any help on where to start?
I'm no iPhone programmer, but I might be able to suggest a few things:
The SURF feature detection implemented in OpenCV should help you with this
There is a nice article on using OpenCV in Objective-C code.
A quick & dirty way might be to use the difference blend mode, which should return the difference between the 1st image (top) and the 2nd image (bottom). If there is no difference, the result will be completely black. So, the more black pixels in the difference result, the more similar the compared images are likely to be.
I'm not an iOS developer, so I don't know if there is an image library that ships with the SDK or a free/open-source library for basic image processing. Still, this should be trivial to implement:
e.g.
- (int)differenceBetweenTopPixel:(int)topPixel andBottomPixel:(int)bottomPixel
{
    return abs(topPixel - bottomPixel);
}
Note: Syntax might not be correct :)
HTH
This may not help you with taking a screenshot of the iOS home screen... But these articles show how to take snapshots from within a UIKit application:
https://developer.apple.com/library/prerelease/ios/#qa/qa1703/_index.html
https://developer.apple.com/library/prerelease/ios/#qa/qa1714/_index.html
Perhaps you could instruct the user to press the Home and Power buttons to take a screenshot, which is saved to the photo roll, and then load that screenshot into your app for processing.
Hope this helps!
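Coming back to the original question of slicing out the icon, here is a minimal Core Graphics sketch; the crop rectangle in the usage comment is a made-up placeholder, not an actual home-screen layout:
#import <UIKit/UIKit.h>

// Crops a rectangular slice out of a UIImage.
// cropRect is in pixel coordinates of the underlying CGImage.
static UIImage *SliceOfImage(UIImage *source, CGRect cropRect)
{
    CGImageRef slice = CGImageCreateWithImageInRect(source.CGImage, cropRect);
    if (slice == NULL) {
        return nil;
    }
    UIImage *result = [UIImage imageWithCGImage:slice
                                          scale:source.scale
                                    orientation:source.imageOrientation];
    CGImageRelease(slice);
    return result;
}

// Usage (hypothetical rectangle for the first icon):
// UIImage *firstIcon = SliceOfImage(screenshot, CGRectMake(32, 60, 120, 120));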

AVPlayerLayer - ReProgramming the Wheel?

I'm currently using an AVPlayer, along with an AVPlayerLayer to play back some video. While playing back the video, I've registered for time updates every 30th of a second during the video. This is used to draw a graph of the acceleration at that point in the video, and have it update along with the video. The graph is using the CMTime from the video, so if I skip to a different portion of the video, the graph immediately represents that point in time in the video with no extra work.
Anywho, as far as I'm aware, if I want an interface similar to what the MediaPlayer framework offers, I'm going to have to build it myself.
What I'm wondering is: is there a way to use my AVPlayer with the MediaPlayer framework? (Not that I can see.) Or is there a way to register for incremental time updates with the MediaPlayer framework?
My code, if anyone is interested, follows:
[moviePlayer addPeriodicTimeObserverForInterval:CMTimeMake(1, 30)
                                           queue:dispatch_queue_create("eventQueue", NULL)
                                      usingBlock:^(CMTime time) {
    loopCount = (int)(CMTimeGetSeconds(time) * 30);
    if (loopCount < [dataPointArray count]) {
        dispatch_sync(dispatch_get_main_queue(), ^{
            [graphLayer setNeedsDisplay];
        });
    }
}];
Thanks!
If you're talking about the window chrome displayed by MPMoviePlayer then I'm afraid you are looking at creating this UI yourself.
AFAIK there is no way of achieving the timing behaviour you need using the MediaPlayer framework, which is very much a simple "play some media" framework. You're doing the right thing by using AVFoundation.
Which leaves you needing to create the UI yourself. My suggestion would be to start with a XIB file to create the general layout; toolbar at the top with a done button, a large view that represents a custom playback view (using your AVPlayerLayer) and a separate view to contain your controls.
You'll need to write some custom controller code to automatically show/hide the playback controls and toolbar as needed if you want to simulate the MPMoviePlayer UI.
You can use https://bitbucket.org/brentsimmons/ngmovieplayer as a starting point (it may not have existed when you originally asked).
From the project page: "Replicates much of the behavior of MPMoviePlayerViewController -- but uses AVFoundation."
You might want to look at the AVSynchronizedLayer class. There isn't a lot about it in the official programming guide, but you can find bits of info here and there: subfurther, Otter Software.
In O'Reilly's Programming iOS 4 (or 5) there's also a short example of making a square move and stop along a line in sync with the video.
Another demo (not a lot of code) is shown in the WWDC 2011 session "Working with Media in AV Foundation".
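For reference, a minimal sketch of wiring up AVSynchronizedLayer; player is assumed to be your existing AVPlayer and graphContainerView is a placeholder view name:
#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

// Layers added under an AVSynchronizedLayer have their animations driven by the
// player item's timebase, so they pause, seek, and scrub together with the video.
AVSynchronizedLayer *syncLayer =
    [AVSynchronizedLayer synchronizedLayerWithPlayerItem:player.currentItem];
syncLayer.frame = graphContainerView.bounds;

CALayer *markerLayer = [CALayer layer];
markerLayer.frame = CGRectMake(0, 0, 10, 10);
[syncLayer addSublayer:markerLayer];

[graphContainerView.layer addSublayer:syncLayer];

// Any CAAnimation added to markerLayer with its beginTime/duration expressed in
// asset time will now stay in sync with playback.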

Apple Magic Mouse Api

I just bought a Magic Mouse and I like it a lot, and as a Mac developer it's even cooler. But there's one problem: is there already an API available for it? I want to use it in one of my applications, for example to detect the user's finger positions, swipe or stretch gestures, etc.
Does anyone know if there's an API for it (and how to use it)?
The Magic Mouse does not use the NSTouch API. I have been experimenting with it and attempting to capture touch information, but I've had no luck so far. The only touch method that is common to both the mouse and the trackpad is the swipeWithEvent: method, and it is called only for a two-finger swipe on the device.
It seems the touch input from the mouse is being interpreted somewhere else, then forwarded on to the public API. I have yet to find the private API that is actually doing the work.
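For what it's worth, a minimal sketch of handling that one shared gesture in an NSView subclass (the log message is illustrative only):
// In an NSView subclass; swipeWithEvent: is the one touch-related responder
// method reported above to fire for the Magic Mouse (two-finger swipe).
- (void)swipeWithEvent:(NSEvent *)event
{
    // deltaX/deltaY indicate the swipe direction.
    NSLog(@"Two-finger swipe: deltaX=%.1f deltaY=%.1f", event.deltaX, event.deltaY);
}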
Take a look here: http://www.iphonesmartapps.org/aladino/?a=multitouch
There's a fully working proof-of-concept using the CGEventPost method.
All the best!
I have not tested, but I would be shocked if it didn't use NSTouch. NSTouch is the API you use to interact with the multi-touch trackpads on current MacBook Pros (and the new MacBooks that came out this week). You can check out the LightTable sample project to see how it is used.
It is part of AppKit, but it is a Snow Leopard-only API.
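Should the mouse turn out to deliver NSTouch events after all (or for the trackpad case), a minimal sketch of receiving them in an NSView subclass looks like this; the class name is just an example:
#import <Cocoa/Cocoa.h>

@interface TouchView : NSView
@end

@implementation TouchView

- (instancetype)initWithFrame:(NSRect)frameRect
{
    self = [super initWithFrame:frameRect];
    if (self) {
        self.acceptsTouchEvents = YES;   // opt in to touch events (10.6+)
    }
    return self;
}

- (void)touchesBeganWithEvent:(NSEvent *)event
{
    NSSet *touches = [event touchesMatchingPhase:NSTouchPhaseBegan inView:self];
    for (NSTouch *touch in touches) {
        // normalizedPosition is in the 0..1 range across the device surface.
        NSLog(@"Touch began at %@", NSStringFromPoint(touch.normalizedPosition));
    }
}

@end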
I messed around with the app below before getting my Magic Mouse, and I was surprised to find that it also tracked the multi-touch points on the mouse.
There is a link in the comments to some source that gets the raw data similarly, but there is no source for the actual app.
http://lericson.blogg.se/code/2009/november/multitouch-on-unibody-macbooks.html