I am in the process of developing a thumbnail MKMapView to show a single point on the map. However, as the thumbnail is only 70x61px, the Google logo takes up a large proportion of the map.
Can you tell me a way of using the MKMapView so that the Google logo is less visible or can't be seen, while avoiding app rejection, or suggest any alternatives to using the MKMapView?
Thanks in advance.
How it looks at the moment:
Have you looked into the Google Static Maps API? It returns regular JPEG maps rather than interactive ones. You might be able to craft a URL that gets you a small enough image for your thumbnail. I don't know whether that would be OK according to their license or not.
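A rough sketch of what such a request might look like, assuming the Static Maps terms allow this use (the coordinate and zoom values are placeholders, and self.thumbnailImageView is a hypothetical UIImageView):

CLLocationCoordinate2D coordinate = CLLocationCoordinate2DMake(51.5074, -0.1278);
NSString *urlString = [NSString stringWithFormat:
    @"http://maps.googleapis.com/maps/api/staticmap?center=%f,%f&zoom=15&size=70x61&sensor=false",
    coordinate.latitude, coordinate.longitude];

// Fetch off the main thread so the UI doesn't block on the network.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
    UIImage *thumbnail = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.thumbnailImageView.image = thumbnail;
    });
});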
Start developing with the iOS 6 beta. There are significant changes to MapKit that remove Google as the data provider (and thus their logo). The final version of iOS 6 and its SDK will be released in the next couple of weeks, so you will also be able to submit an iOS 6 app soon.
I am making a DJI Mobile SDK app and have set up an application that gets live video from the drone and displays it in a view, but I need to pull a single frame from the video feed to work with and cannot figure out how to do it.
One option would be to take a picture with the drone and then download it from the SD card, but I do not need the full-resolution image, and it feels like there must be a simpler way to just get a single frame from the video preview.
The code that pushes the video stream to the previewer is:
-(void)videoFeed:(DJIVideoFeed *)videoFeed didUpdateVideoData:(NSData *)videoData {
    // Forward each chunk of raw video data to DJIVideoPreviewer for decoding and display.
    [[DJIVideoPreviewer instance] push:(uint8_t *)videoData.bytes length:(int)videoData.length];
}
Any ideas on how to pull an individual frame from the feed? Or is there maybe a way to have an iOS app just take a screenshot and work with that?
Thanks!
I'm not very familiar with iOS. For Android there is a sample that uses the DJI Mobile SDK to grab still images and use them for panorama stitching: https://github.com/DJI-Mobile-SDK-Tutorials/Android-PanoramaDemo.
The equivalent iOS version of the panorama stitching demo is here: https://github.com/DJI-Mobile-SDK-Tutorials/iOS-PanoramaDemo
Maybe you can get an idea of how to grab a still image from there.
There are several threads about this for Android, and I don't think iOS would be much different:
how to get bitmap data from drone camera stream. android application
Getting the bitmap from the fpvWidget is by far the simplest and fastest solution:
public Bitmap getFrameBitmap() {
    return fpvWidget.getBitmap();
}
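On iOS, a comparable quick-and-dirty approach is to snapshot the view that DJIVideoPreviewer renders into. This is just a sketch: self.fpvView stands in for whatever view you attach the previewer to, and depending on how the video layer is drawn it may or may not capture the GPU-rendered frames.

- (UIImage *)frameSnapshot {
    UIGraphicsBeginImageContextWithOptions(self.fpvView.bounds.size, NO, 0.0);
    [self.fpvView drawViewHierarchyInRect:self.fpvView.bounds afterScreenUpdates:YES];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}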
Can anyone suggest the best way to implement an offline map with the following features?
Add an MKOverlayView with a local static map image
Restrict zooming outside the MKOverlay area
The Google map should not appear on the screen
Add multiple annotations at some fixed locations
Tracking & rotating
I have used the MapKit framework to start with and have already added an MKOverlayView. But when adding a few fixed annotations, it doesn't work without an internet connection.
I don't think all of the above can be achieved using only the MapKit framework, so can anyone suggest an exact solution for it?
Any suggestions or hints will be appreciated.
Thanks.
It is possible to make MapKit load map contents from a private map database. I don't remember if this is new in iOS 6 or 7; I want to say iOS 7. There was a WWDC session where an Apple engineer set up a private map as a demo.
Usually you'd host the map on a server and have your app download map tiles from the server. In your case you'd have it load tiles from a local directory on the device. However, map content gets big fast. You'd only be able to cover fairly modest areas before the file sizes of your map content became prohibitively large.
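A minimal sketch of that idea using MKTileOverlay (iOS 7+), assuming the tiles are bundled with the app as tiles/{z}/{x}/{y}.png:

NSString *tilesPath = [[[NSBundle mainBundle] bundlePath] stringByAppendingPathComponent:@"tiles"];
NSString *template = [NSString stringWithFormat:@"file://%@/{z}/{x}/{y}.png", tilesPath];
MKTileOverlay *overlay = [[MKTileOverlay alloc] initWithURLTemplate:template];
overlay.canReplaceMapContent = YES; // hide the underlying base map so only your tiles show
[self.mapView addOverlay:overlay level:MKOverlayLevelAboveLabels];

// MKMapViewDelegate
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView rendererForOverlay:(id<MKOverlay>)overlay {
    if ([overlay isKindOfClass:[MKTileOverlay class]]) {
        return [[MKTileOverlayRenderer alloc] initWithTileOverlay:(MKTileOverlay *)overlay];
    }
    return nil;
}

Restricting zoom and pan to the overlay's area still has to be handled separately, e.g. by clamping the visible region in the map view's delegate.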
I would recommend that you look into MBXMapKit:
http://mapbox.com/mbxmapkit
I'm investigating whether it's possible to implement the same functionality as the ZBar library with the iOS 7 API.
Everything was good so far thanks to this tutorial.
However, I now want to have a green box shown on the screen whenever the camera detects a QRCode. The green box is supposed to wrap around the QRCode.
From the delegate of AVCaptureMetadataOutput, I can grab an AVMetadataObject, but the bounds I get from this object are always very small, which doesn't seem correct given that my QR code is very big on the screen.
Anyone has any ideas on how to achieve the green focusing box?
P.S.: I came across the documentation and couldn't understand this line about the bounds property of AVMetadataObject: "If the metadata originates from video, bounds may be expressed as scalar values from 0. - 1."
You can look at this tutorial for QR code scanning using iOS 7.
I had to do the same thing in my scanner app. Here is a link that I found very useful and pretty much answered all my questions.
He goes step by step from setting up the scanner to adding the bounding box.
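Whichever tutorial you follow, the key step is converting the normalized (0 to 1) bounds of the AVMetadataObject into the preview layer's coordinate space before drawing the box. A sketch, assuming self.previewLayer is your AVCaptureVideoPreviewLayer and self.boundingBox is a CALayer with a green border laid on top of it:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataObject *object in metadataObjects) {
        if (![object.type isEqualToString:AVMetadataObjectTypeQRCode]) continue;
        // transformedMetadataObjectForMetadataObject: converts the normalized
        // bounds into the preview layer's coordinate system.
        AVMetadataObject *converted =
            [self.previewLayer transformedMetadataObjectForMetadataObject:object];
        self.boundingBox.frame = converted.bounds;
    }
}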
I'm basically trying to work out how to take a slice of an image (say a screenshot of an iPhone home screen), slice out the first icon, and compare it to a set array of images in a library. Any help on where to start?
I'm no iPhone programmer, but I might be able to suggest a few things:
The SURF feature detection implemented in OpenCV should help you with this
There is a nice article on using OpenCV in Objective-C code.
A quick & dirty way might be to use the difference blend mode, which should return the difference between the first image (top) and the second image (bottom). If there is no difference, the result will be completely black. So the more black pixels in the difference result, potentially, the more similarities between the compared images.
I'm not an iOS developer, so I don't know if there is an image library that ships with the SDK or if there's a free/open-source library for basic image processing. Still, this should be trivial to implement:
e.g.
- (int)differenceBetween:(int)topPixel and:(int)bottomPixel
{
    return abs(topPixel - bottomPixel);
}
Note: Syntax might not be correct :)
HTH
This may not help you with taking a screenshot of the iOS home screen... But these articles show how to take snapshots from within a UIKit application:
https://developer.apple.com/library/prerelease/ios/#qa/qa1703/_index.html
https://developer.apple.com/library/prerelease/ios/#qa/qa1714/_index.html
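In essence, both Q&As boil down to rendering a view's layer into an image context, roughly like this (viewToCapture is whatever view you want to snapshot):

UIGraphicsBeginImageContextWithOptions(viewToCapture.bounds.size, NO, 0.0);
[viewToCapture.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();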
Perhaps you could instruct the user to press the Home and Power buttons to take a screenshot and save it to the photo roll, then load that screenshot into the app to process it.
Hope this helps!
I am adding an image processing feature to my iPhone application. It should do brightness, contrast, sharpen, exposure....
But I am not able to find any article/tutorial on the internet. Can you please help me find a tutorial, or tell me how I can implement this in an iPhone view-based application?
I have found one link, http://www.iphonedevsdk.com/forum/iphone-sdk-development/10094-adjust-image-brightness-contrast-fly.html, which worked for brightness, but it's not working on iPad.
So please suggest something that I can start my image processing logic with.
Thanks
Rick Jackson
I personally like the approach in the GLImageProcessing project from Apple's sample code. Check it out.
There are a few libraries that support image processing in Quartz. There are even a few categories on UIImage to do some basic stuff.
The following are a few examples:
https://github.com/esilverberg/ios-image-filters
https://github.com/cmkilger/CKImageAdditions
http://code.google.com/p/simple-iphone-image-processing/
But as said before by @Felz, those libraries are slow because they use the Quartz codebase, which isn't that fast (for example, changing the saturation of an image with a resolution of 1024x1024 might take 4 to 8 seconds, depending on which device you're using).
If your project targets iOS 5 or higher then you should definitely consider using Core Image.
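For example, brightness and contrast can be adjusted with Core Image's CIColorControls filter; a minimal sketch (the filter values are placeholders):

CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:input forKey:kCIInputImageKey];
[filter setValue:@(0.1) forKey:@"inputBrightness"]; // -1.0 to 1.0
[filter setValue:@(1.2) forKey:@"inputContrast"];   // 1.0 leaves the image unchanged

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
UIImage *result = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);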
You can try the GPUImage framework created by Brad Larson. It includes awesome image filters and is also easy to use.
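A quick sketch of what a brightness adjustment looks like with GPUImage (assuming the framework is already added to your project):

#import "GPUImage.h"

GPUImageBrightnessFilter *brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
brightnessFilter.brightness = 0.1; // -1.0 to 1.0; 0.0 leaves the image unchanged
UIImage *filtered = [brightnessFilter imageByFilteringImage:sourceImage];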