Pull Single Frame from Video Feed (DJI Mobile SDK) - objective-c

I am making a DJI Mobile SDK app and have set up an application that gets live video from the drone and displays it in a view, but I need to pull a single frame from the video feed to work with, and I cannot figure out how to do it!
One method would be to take a picture with the drone and then download it from the SD card, but I do not require the full-resolution image, and it feels like there must be a simpler way to just grab a single frame from the video preview.
The code that forwards the video stream to the previewer is:
-(void)videoFeed:(DJIVideoFeed *)videoFeed didUpdateVideoData:(NSData *)videoData {
    // Forward the raw stream data to DJIVideoPreviewer for decoding and display.
    [[DJIVideoPreviewer instance] push:(uint8_t *)videoData.bytes length:(int)videoData.length];
}
Any ideas on how to pull an individual frame from the feed? Or is there maybe a way to have the iOS app just take a screenshot and work with that?
Thanks!

I'm not very familiar with iOS. For Android there is a sample which uses the DJI MSDK to grab still images and uses them for panorama stitching: https://github.com/DJI-Mobile-SDK-Tutorials/Android-PanoramaDemo.
The equivalent iOS version of the panorama stitching demo is here: https://github.com/DJI-Mobile-SDK-Tutorials/iOS-PanoramaDemo
Maybe you can get an idea of how to grab a still image from there.

There are several threads about this for Android, and iOS shouldn't be much different, I think. See, for example: "How to get bitmap data from drone camera stream (Android application)".
Getting the bitmap from the fpvWidget is by far the simplest and fastest solution:
public Bitmap getFrameBitmap() {
    return fpvWidget.getBitmap();
}
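On iOS there is no fpvWidget, but you can do the equivalent trick: if the decoded feed is being rendered into a UIView (for example the view you hand to DJIVideoPreviewer via setView:), you can snapshot that view with UIKit. A minimal sketch, where self.fpvPreviewView is a hypothetical property holding the view that shows the live feed:

- (UIImage *)currentFrameImage {
    // Render the preview view (and everything in it) into an offscreen image context.
    UIGraphicsBeginImageContextWithOptions(self.fpvPreviewView.bounds.size, NO, 0.0);
    [self.fpvPreviewView drawViewHierarchyInRect:self.fpvPreviewView.bounds
                              afterScreenUpdates:YES];
    UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return frame;
}

Note that this captures the frame at the preview view's size, not the stream's native resolution, and whether GPU-rendered content shows up depends on how the previewer draws. If the result comes back blank, you would have to grab the frame earlier in the pipeline (e.g. by decoding the stream data yourself) or check whether your DJIVideoPreviewer version exposes a snapshot callback.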

Related

Different results for the image and a screenshot of the image

I am using an object localizer with the React Native image picker to get coordinates of objects within an image. When I send an image taken as a photo with the Android device, the results I get are not accurate, but when I take a screenshot of the photo and send that instead, the results are almost perfect. Why might this be the case and how can I fix it?
The interesting thing is that when I use the Android Studio emulator and send photos without taking screenshots of them, the results are correct too. I have read that there are recommended image sizes for these operations, however I could not find one for the object localizer.
Edit: I have found that when I take a screenshot, the image resolution is equal to my device's width and height, whereas when I take a photo it uses the camera's resolution. To give an example, right now when I take a photo its resolution is 4032x2268, and the resolution of that image's screenshot is 1080x2220, which is the resolution of my Android device. Is there any way to set the camera's resolution to the same as the device's resolution?

Thumbnail MKMapView without Google Logo

I am in the process of developing a thumbnail MKMapView to show a single point on the map. However, as the thumbnail is only 70x61px, the Google logo takes up a large proportion of the map.
Can you please tell me a way of using the MKMapView so that the Google logo is less visible or can't be seen (while avoiding app rejection), or any alternatives to using the MKMapView?
Thanks in advance.
Have you looked into the Google Maps Static API? It returns regular JPEG maps rather than interactive ones. You might be able to craft a URL that gets you a small enough image for your thumbnail. I don't know whether that would be OK according to their license or not.
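As a rough illustration (the coordinates and key are placeholders, and you should double-check the current parameter limits and terms of use), building such a request in Objective-C might look like this:

// Request a static map sized to the 70x61 thumbnail instead of embedding an MKMapView.
double lat = 51.5007, lng = -0.1246; // placeholder coordinate
NSString *urlString = [NSString stringWithFormat:
    @"https://maps.googleapis.com/maps/api/staticmap?center=%f,%f&zoom=15&size=70x61&key=YOUR_API_KEY",
    lat, lng];
NSURL *mapURL = [NSURL URLWithString:urlString];
// Synchronous fetch just for illustration; use an asynchronous request in real code.
NSData *imageData = [NSData dataWithContentsOfURL:mapURL];
UIImage *thumbnail = [UIImage imageWithData:imageData];

You would then show the resulting image in a plain UIImageView rather than an MKMapView.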
Start developing with the iOS 6 beta. There are significant changes to MapKit that remove Google as the data provider (and thus their logo). The final version of iOS 6 and its SDK will be released in the next couple of weeks, so you will also be good to go submitting an iOS 6 app soon.

360 degree video in MPMoviePlayerController

I am trying to develop an iPhone application which needs to show a 360-degree video and rotate the video as the phone moves. How can I do this? Is it possible to do this with a normal MPMoviePlayerController?
I don't think you can do this with a normal MPMoviePlayerController, but there are several libraries out there to achieve this. Have a look here:
PanoramaGL
Panorama 360
They work with OpenGL and you can embed them in your Objective-C code.
EDIT:
As @Mangesh Vyas kindly pointed out, those are intended for use with fixed images only. However, they might be a suitable starting point for embedding video as well if you modify the code accordingly. They already handle direction, accelerometer readings, etc., so you don't have to implement all of that yourself.
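For reference, the motion-tracking side is fairly self-contained if you do end up rolling your own. A rough sketch of reading the device attitude with Core Motion; the update interval and the updateCameraWithYaw:pitch: method are assumptions about how you would drive the renderer:

#import <CoreMotion/CoreMotion.h>

// Keep a strong reference to the motion manager, e.g. as a property.
CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;
[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                    withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (error != nil) { return; }
    double yaw   = motion.attitude.yaw;   // rotation around the vertical axis
    double pitch = motion.attitude.pitch; // tilt up/down
    // Feed the attitude into whatever is rendering the panorama/video sphere.
    [self updateCameraWithYaw:yaw pitch:pitch]; // hypothetical method on your renderer
}];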

Modifying video frames with QTKit and OpenGL

I am working on a project where I would like to open a video (on a Mac) with QTKit. That part I can do no problem, but as I am playing it, I would like to edit or modify the video on the fly using OpenGL.
From what I understand, I should be able to intercept the frames and change them before it hits the display, but no matter what I do, I cannot seem to do so.
It sounds like you should have a look at Core Video and the display link mechanism.
You can basically get a callback on a high priority thread with the decoded frame in a CVImageBuffer and do whatever you like with it (including packing it up as a texture for OpenGL processing and display).
Apple provides documentation and demo code snippets on the developer sites.
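A bare-bones display link setup looks roughly like this. The MyVideoController class and its renderFrameForTime: method are placeholders for wherever you fetch the current CVImageBuffer (with QTKit that would typically come from a QTVisualContext) and do the OpenGL work:

#import <CoreVideo/CoreVideo.h>

// Called by Core Video on a high-priority background thread once per display refresh.
static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink,
                                      const CVTimeStamp *inNow,
                                      const CVTimeStamp *inOutputTime,
                                      CVOptionFlags flagsIn,
                                      CVOptionFlags *flagsOut,
                                      void *displayLinkContext)
{
    MyVideoController *controller = (__bridge MyVideoController *)displayLinkContext;
    // Placeholder: fetch the decoded frame for this timestamp, upload it as an
    // OpenGL texture, run your processing, then draw.
    [controller renderFrameForTime:inOutputTime];
    return kCVReturnSuccess;
}

// Somewhere in your setup code:
CVDisplayLinkRef displayLink;
CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
CVDisplayLinkSetOutputCallback(displayLink, &MyDisplayLinkCallback, (__bridge void *)self);
CVDisplayLinkStart(displayLink);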

iOS: compare a slice of an image to library of options

I'm basically trying to work out how to take a slice of an image, say a screenshot of an iPhone home screen, slice out the first icon and compare it to a set array of images in a library. Any help on where to start?
I'm no iPhone programmer, but I might be able to suggest a few things:
The SURF feature detection implemented in OpenCV should help you with this.
There is a nice article on using OpenCV in Objective-C code.
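For the "slice out the first icon" part, Core Graphics can crop the region directly before you run any comparison. A small sketch; the rectangle values are guesses and would need adjusting for the screenshot's actual layout and scale:

// Crop a fixed rectangle (e.g. where the first home-screen icon sits) out of a screenshot.
// CGImageCreateWithImageInRect works in pixel coordinates, so account for the image scale.
- (UIImage *)iconSliceFromScreenshot:(UIImage *)screenshot
{
    CGRect iconRect = CGRectMake(32.0, 60.0, 114.0, 114.0); // assumed position/size of the icon
    CGImageRef croppedRef = CGImageCreateWithImageInRect(screenshot.CGImage, iconRect);
    UIImage *slice = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return slice;
}

The resulting slice can then be compared against each image in the reference library with whichever similarity measure you choose.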
A quick & dirty way might be to use the difference blend mode which should return the difference between the 1st image(top) and the 2nd image(bottom). If there is no difference the result will be completely black. So, the more black pixels in the difference result, potentially, the more similarities between the compared images.
I'm not an iOS developer, so I don't know if there is an image library that ships with sdk or if there's a free/opensource library for basic image processing. Still this should be trivial to implement:
e.g.
static int difference(int topPixel, int bottomPixel)
{
    return abs(topPixel - bottomPixel); // per-channel "difference blend"
}
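Fleshing that out on iOS, one way to score the overall difference between two equally-sized images is to draw both into bitmap contexts and average the per-byte differences. This is only a rough sketch: it assumes both images already have identical pixel dimensions and skips all error handling:

// Returns the average per-byte difference between two same-sized images (0 = identical).
static double averageImageDifference(UIImage *top, UIImage *bottom)
{
    CGImageRef topRef = top.CGImage;
    CGImageRef bottomRef = bottom.CGImage;
    size_t width  = CGImageGetWidth(topRef);
    size_t height = CGImageGetHeight(topRef);
    size_t bytesPerRow = width * 4;                 // RGBA, 8 bits per channel
    size_t byteCount = bytesPerRow * height;

    uint8_t *topPixels = calloc(byteCount, 1);
    uint8_t *bottomPixels = calloc(byteCount, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef topCtx = CGBitmapContextCreate(topPixels, width, height, 8, bytesPerRow,
                                                colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextRef bottomCtx = CGBitmapContextCreate(bottomPixels, width, height, 8, bytesPerRow,
                                                   colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(topCtx, CGRectMake(0, 0, width, height), topRef);
    CGContextDrawImage(bottomCtx, CGRectMake(0, 0, width, height), bottomRef);

    double total = 0;
    for (size_t i = 0; i < byteCount; i++) {
        total += abs(topPixels[i] - bottomPixels[i]);   // the "difference blend" per byte
    }

    CGContextRelease(topCtx);
    CGContextRelease(bottomCtx);
    CGColorSpaceRelease(colorSpace);
    free(topPixels);
    free(bottomPixels);
    return total / byteCount;
}

The closer the returned value is to zero, the more similar the two images are, which matches the "mostly black difference result" idea above.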
HTH
This may not help you with taking a screenshot of the iOS home screen... But these articles show how to take snapshots from within a UIKit application:
https://developer.apple.com/library/prerelease/ios/#qa/qa1703/_index.html
https://developer.apple.com/library/prerelease/ios/#qa/qa1714/_index.html
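The gist of those notes is to render a window's layer into an image context. Roughly, using the pre-iOS 7 APIs the QAs cover and simplified to just the key window:

#import <QuartzCore/QuartzCore.h>

// Render the key window into a UIImage (the approach QA1703 describes).
- (UIImage *)snapshotOfKeyWindow
{
    UIWindow *window = [UIApplication sharedApplication].keyWindow;
    UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO, 0.0);
    [window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}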
Perhaps you could instruct the user to press Home+Power to take a screenshot (which is saved to the camera roll), then load that screenshot into your app and process it there.
Hope this helps!