Preview image of map - objective-c

How can I make a preview icon like in the iPhone Map application?
Screenshot http://img835.imageshack.us/img835/1619/img0016x.png
Is there a function for this?

In iOS 7, use MKMapSnapshotter. From NSHipster:
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = self.mapView.region;
options.size = self.mapView.frame.size;
options.scale = [[UIScreen mainScreen] scale];

NSURL *fileURL = [NSURL fileURLWithPath:@"path/to/snapshot.png"];

MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"[Error] %@", error);
        return;
    }

    UIImage *image = snapshot.image;
    NSData *data = UIImagePNGRepresentation(image);
    [data writeToURL:fileURL atomically:YES];
}];
Before iOS 7, do one of the following:
Add a live map and disable interaction.
Use the Google Static Maps API (explained below).
Create an MKMapView and render it to an image. I tried this (see previous edit) but couldn't get the tiles to load; I tried calling needsLayout and other methods, but that didn't work.
Apple's deal with Google makes the built-in maps more reliable than a free API, but on the bright side, the Static Maps API is really simple. Documentation is at http://code.google.com/apis/maps/documentation/staticmaps/
This is the minimum set of parameters for practical use:
http://maps.googleapis.com/maps/api/staticmap?center=40.416878,-3.703530&zoom=15&size=290x179&sensor=false
A summary of the parameters:
center: this can be an address, or a latitude,longitude pair with up to 6 decimals (more than 6 are ignored).
format: png8 (default), png24, gif, jpg, jpg-baseline.
language: any language; the default is used if the requested one isn't available.
maptype: one of the following: roadmap, satellite, hybrid, terrain.
scale: 1, 2, or 4. Use 2 to return twice the number of pixels for retina displays; 4 is restricted to premium customers. Non-paid resolution limits are 640x640 for scales 1 and 2.
sensor: true or false. Mandatory. It indicates whether the user is being located using a device sensor.
size: size in pixels.
zoom: 0 (whole planet) to 21.
There are a few more parameters to add polygons and custom markers, and to style the map.
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:@"http://maps.googleap..."]];
UIImage *img = [UIImage imageWithData:data];
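To make the parameter list above concrete, here is a rough sketch (in Python, purely for illustration) of assembling a Static Maps request URL from those parameters; the coordinates and size are the sample values used above:

```python
from urllib.parse import urlencode

def static_map_url(lat, lon, zoom=15, size=(290, 179), sensor=False,
                   base="http://maps.googleapis.com/maps/api/staticmap"):
    """Build a Google Static Maps URL from the parameters described above."""
    params = {
        "center": "%.6f,%.6f" % (lat, lon),   # up to 6 decimals are honored
        "zoom": zoom,                          # 0 (whole planet) to 21
        "size": "%dx%d" % size,                # size in pixels
        "sensor": "true" if sensor else "false",  # mandatory parameter
    }
    return base + "?" + urlencode(params)

print(static_map_url(40.416878, -3.703530))
```

The resulting string can then be fetched exactly as in the NSData snippet above.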

You can use the Google Static Maps API to generate an image instead of loading an entire MKMapView.

Related

AVCaptureSession resolution doesn't change with AVCaptureSessionPreset

I want to change the resolution of pictures I take with the camera on OS X with AV Foundation.
But even if I change the resolution of my AVCaptureSession, the output picture size doesn't change. I always have a 1280x720 picture.
I want a lower resolution because I use these pictures in a real time process and I want the program to be faster.
This is a sample of my code:
session = [[AVCaptureSession alloc] init];
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    [session setSessionPreset:AVCaptureSessionPreset640x480];
}

AVCaptureDeviceInput *device_input = [[AVCaptureDeviceInput alloc]
    initWithDevice:[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo][0] error:nil];
if ([session canAddInput:device_input])
    [session addInput:device_input];

still_image = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *output_settings = [[NSDictionary alloc] initWithObjectsAndKeys:
    AVVideoCodecJPEG, AVVideoCodecKey, nil];
[still_image setOutputSettings:output_settings];
[session addOutput:still_image];
What should I change in my code?
I have also run into this issue and found a solution that seems to work. For some reason, on OS X, AVCaptureStillImageOutput breaks the capture session presets.
What I did was change the AVCaptureDevice's active format directly. Try this code right after you add your still image output to your capture session:
// Get the device and its list of supported formats
AVCaptureDevice *device = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo][0];
NSArray *supportedFormats = [device formats];

// Find the format closest to what you are looking for
// (this is just one way of finding it)
NSInteger desiredWidth = 640;
AVCaptureDeviceFormat *bestFormat;
for (AVCaptureDeviceFormat *format in supportedFormats) {
    CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions((CMVideoFormatDescriptionRef)[format formatDescription]);
    if (dimensions.width <= desiredWidth) {
        bestFormat = format;
    }
}

[device lockForConfiguration:nil];
[device setActiveFormat:bestFormat];
[device unlockForConfiguration];
There may be other ways of fixing this issue, but this fixed it for me.
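The selection loop above keeps the last format whose width fits, which (assuming the device lists its formats in ascending resolution) amounts to picking the largest width not exceeding the target. A quick sketch of that rule, with hypothetical capture widths:

```python
def best_format(widths, desired_width):
    """Mimic the Objective-C loop above: keep the last width that is <= desired_width."""
    best = None
    for w in widths:
        if w <= desired_width:
            best = w
    return best

# Hypothetical list of supported capture widths, in ascending order
print(best_format([320, 352, 640, 960, 1280], 640))  # picks 640
```

Note that if the device reported its formats in an arbitrary order, you would instead want an explicit max over the widths that fit.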
So I had this problem too, but using the raw AVCaptureVideoDataOutput instead of JPEG.
The issue is that the session presets Low/Medium/High do affect the capture device in some ways, for example the frame rate, but they won't change the hardware capture resolution - it will always capture at 1280x720. The idea, I think, is that a normal QuickTime output device will figure this out and add a scaling step to 640x480 (for example) if the session preset is set to Medium.
But when using the raw outputs, they won't care about the preset's desired dimensions.
The solution, contrary to Apple's documentation on videoSettings, is to add the requested dimensions to the videoSettings:
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithDouble:640], (id)kCVPixelBufferWidthKey,
    [NSNumber numberWithDouble:480], (id)kCVPixelBufferHeightKey,
    [NSNumber numberWithInt:kCMPixelFormat_422YpCbCr8_yuvs], (id)kCVPixelBufferPixelFormatTypeKey,
    nil];
[captureoutput setVideoSettings:outputSettings];
I say contrary to Apple's docs because the docs say that the pixel format type key is the only key allowed here. But the buffer width/height keys actually do work, and are needed.

How to draw routes in maps, iOS 6?

I have been drawing routes in iOS 5, but now that the app has to be upgraded to iOS 6, it no longer lets me draw routes between coordinate locations. I have searched a lot, but ended up in vain. Can anyone help me on which API to use in iOS 6, since Google Maps isn't supported anymore?
EDIT: I previously used the Google iOS SDK to plot the routes, which automatically takes care of the polylines. As far as I have searched, I can redirect to the browser to show routes and navigation, but I need to draw this in-app (iOS 6). Does iOS 6 automatically do the routing? If so, can you please share some code to get a gist of it? This is the code sample I used in iOS 5 with Google Maps:
// overlay map to draw route
routeOverlayView = [[UICRouteOverlayMapView alloc] initWithMapView:routeMapView];
directions = [UICGDirections sharedDirections];
UICGDirectionsOptions *options = [[UICGDirectionsOptions alloc] init];
// setting travel mode to driving
options.travelMode = UICGTravelModeDriving;
[directions loadWithStartPoint:startPoint endPoint:endPoint options:options];

// Overlay polylines
UICGPolyline *polyline = [directions polyline];
NSArray *routePoints = [polyline routePoints];
NSLog(@"routePoints %@", routePoints);
[routeOverlayView setRoutes:routePoints];
I implemented routing in my application. I used the Google Geocoding API (for iOS 6 only): https://developers.google.com/maps/documentation/geocoding/
Something like this:
- (void)sendRequestForLat:(CGFloat)lat lon:(CGFloat)lon
{
    NSString *request;
    NSInteger zoom = 12;
    NSString *location = [NSString stringWithFormat:@"%f,%f", lat, lon];
    NSString *daddr = [NSString stringWithFormat:@"%f,%f", myLat, myLon];
    request = [NSString stringWithFormat:@"saddr=%@&daddr=%@&zoom=%i&directionsmode=walking", location, daddr, (int)zoom];
    NSString *typeMapsApp = isGoogleMapsAppPresent ? @"comgoogle" : @"";
    NSString *urlString = [NSString stringWithFormat:@"%@maps://?%@", typeMapsApp, request];
    [[UIApplication sharedApplication] openURL:[NSURL URLWithString:urlString]];
}
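The URL construction in that method can be sketched outside Objective-C as well; the scheme choice (comgooglemaps:// when the Google Maps app is installed, maps:// otherwise) and the query parameters mirror the code above (the coordinates here are made up):

```python
def directions_url(saddr, daddr, zoom=12, google_maps_installed=False):
    """Build a directions URL the way the Objective-C method above does."""
    # comgooglemaps:// if the Google Maps app is present, maps:// otherwise
    scheme = "comgooglemaps" if google_maps_installed else "maps"
    query = "saddr=%f,%f&daddr=%f,%f&zoom=%d&directionsmode=walking" % (
        saddr[0], saddr[1], daddr[0], daddr[1], zoom)
    return "%s://?%s" % (scheme, query)

print(directions_url((55.75, 37.61), (55.76, 37.62)))
```

Opening such a URL hands the routing off to the Maps app rather than drawing the route in your own view.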
You're using a third-party piece of code to do your route drawing, rather than using MapKit itself. The library you're using (which I found on GitHub) hasn't been updated for two years.
I'd recommend you move away from that library to something more current. If you want you could consider using Google's iOS Map SDK, which supports iOS 6 (link here). You could combine this with Google's Directions API to draw polylines directly onto the map. Or you could stick with MKMapView, and again draw directly onto the map with polylines.
It's not that iOS has changed how it draws polylines onto a map - it's that the third-party code you're using is out of date.

iOS: Load PNG from disk, add PixelPerMeter properties to it, and save it to the Photo Album

Apparently with CGImage you can add kCGImagePropertyPNGXPixelsPerMeter and kCGImagePropertyPNGYPixelsPerMeter properties to the image, and they will get added to the pHYs chunk of the PNG. But UIImageWriteToSavedPhotosAlbum doesn't directly accept a CGImage.
I've been trying to load the image as a UIImage, create a CGImage from that, add the properties, and then create a new UIImage from that data and save it to the photo album.
An image is getting written to the photo album, but without the PPM settings. I can verify this in the Preview app on Mac OS, or using ImageMagick's identify.
ObjC isn't my domain, so here's what I've cobbled together from similar Stack Overflow questions:
UIImage *uiImage = [UIImage imageWithContentsOfFile:path];

NSDictionary *properties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:5905], (NSString *)kCGImagePropertyPNGXPixelsPerMeter, /* this doesn't work */
        [NSNumber numberWithInt:5905], (NSString *)kCGImagePropertyPNGYPixelsPerMeter, /* this doesn't work */
        @"This works when saved to disk.", (NSString *)kCGImagePropertyPNGDescription,
        nil], (NSString *)kCGImagePropertyPNGDictionary,
    nil];

NSMutableData *imageData = [NSMutableData data];
CGImageDestinationRef imageDest = CGImageDestinationCreateWithData((CFMutableDataRef)imageData, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(imageDest, uiImage.CGImage, (CFDictionaryRef)properties);
CGImageDestinationSetProperties(imageDest, (CFDictionaryRef)properties); /* I'm setting the properties twice because I was going crazy */
CGImageDestinationFinalize(imageDest);
UIImageWriteToSavedPhotosAlbum([UIImage imageWithData:imageData], nil, NULL, NULL);

// This is a test to see if the data is getting stored at all.
CGImageDestinationRef diskDest = CGImageDestinationCreateWithURL(
    (CFURLRef)[NSURL fileURLWithPath:[NSString stringWithFormat:@"%@.dpi.png", path]], kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(diskDest, uiImage.CGImage, (CFDictionaryRef)properties);
CGImageDestinationSetProperties(diskDest, (CFDictionaryRef)properties);
CGImageDestinationFinalize(diskDest);
Updated: Revised and simplified the code a bit. It still saves the image to the photo album and to disk, but the pixels-per-meter values are not saved. The description block gets saved to disk, but appears to get stripped when written to the photo album.
Update 2: By saving the file as a JPEG and adding TIFF X/Y resolution tags to it, I've been able to get the version saved to disk to store a PPI value other than 72. This information is still stripped off when saving to the photo album.
Comparing a photo taken via the iOS Camera app, one taken via Instagram, and one saved by my code, I noticed that the Camera app version has a ton of TIFF/EXIF tags, while the Instagram one and mine have the same limited set of tags. This leads me to the conclusion that iOS strips these tags intentionally. A definitive answer would help me sleep at night, though.
The answer is: don't use UIImageWriteToSavedPhotosAlbum.
There's an ALAssetsLibrary method:
[writeImageDataToSavedPhotosAlbum:metadata:completionBlock:]
The metadata is a dictionary in the same format as the one passed to CGImageDestinationSetProperties. PNG and JFIF density settings may be broken; I'm using kCGImagePropertyTIFFXResolution.
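For reference, the pHYs and TIFF density values are related by a simple unit conversion: pixels per meter = DPI / 0.0254 (one inch is 0.0254 m). The 5905 used in the question corresponds to roughly 150 DPI. A quick check of the arithmetic:

```python
def dpi_to_ppm(dpi):
    """Convert dots per inch to pixels per meter (1 inch = 0.0254 m)."""
    return dpi / 0.0254

def ppm_to_dpi(ppm):
    """Convert pixels per meter back to dots per inch."""
    return ppm * 0.0254

print(int(dpi_to_ppm(150)))     # 5905 (truncated), the value used in the question
print(round(ppm_to_dpi(5905)))  # 150
```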

Rotate CGImage on disk using CGImageSource/CGImageDestination?

I'm working on an application that needs to take a picture using UIImagePickerController, display a thumbnail of it in the app, and also submit that picture to a server using ASIFormDataRequest (from ASIHTTPRequest).
I want to use setFile:withFileName:andContentType:forKey: from ASIFormDataRequest since in my experience it's faster than trying to submit an image using UIImageJPEGRepresentation and submitting raw NSData. To that end I'm using CGImageDestination and creating an image destination with a url, saving that image to disk, and then uploading that file on disk.
In order to create the thumbnail I'm using CGImageSourceCreateThumbnailAtIndex (see docs) and creating an image source with the path of the file I just saved.
My problem is that no matter what options I pass into the image destination or the thumbnail creation call, my thumbnail always comes out rotated 90 degrees counterclockwise. The uploaded image is also rotated. I've tried explicitly setting the orientation in the options of the image using CGImageDestinationSetProperties but it doesn't seem to take. The only solution I've found is to rotate the image in memory, but I really want to avoid that since doing so doubles the time it takes for the thumbnail+saving operation to finish. Ideally I'd like to be able to rotate the image on disk.
Am I missing something in how I'm using CGImageDestination or CGImageSource? I'll post some code below.
Saving the image:
NSURL *filePath = [NSURL fileURLWithPath:self.imagePath];
CGImageRef imageRef = [self.image CGImage];
CGImageDestinationRef ref = CGImageDestinationCreateWithURL((CFURLRef)filePath, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(ref, imageRef, NULL);

NSDictionary *props = [[NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithFloat:1.0], kCGImageDestinationLossyCompressionQuality,
    nil] retain];

// Note that setting kCGImagePropertyOrientation didn't work here for me
CGImageDestinationSetProperties(ref, (CFDictionaryRef)props);
CGImageDestinationFinalize(ref);
CFRelease(ref);
Generating the thumbnail:
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)filePath, NULL);
if (!imageSource)
    return;

CFDictionaryRef options = (CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys:
    (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
    (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
    (id)[NSNumber numberWithFloat:THUMBNAIL_SIDE_LENGTH], (id)kCGImageSourceThumbnailMaxPixelSize,
    nil];
CGImageRef imgRef = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options);
UIImage *thumb = [UIImage imageWithCGImage:imgRef];
CGImageRelease(imgRef);
CFRelease(imageSource);
And then to upload the image I just use:
[request setFile:path withFileName:fileName andContentType:contentType forKey:@"photo"];
where path is the path to the file saved with the code above.
As far as I know and after trying lots of different things, this cannot be done with current public APIs and has to be done in memory.

How to get [UIImage imageWithContentsOfFile:] and High Res Images working

As many people have reported, it seems there is a bug in the Apple SDK for the Retina display: imageWithContentsOfFile does not automatically load the @2x images.
I stumbled onto a nice post about writing a function that detects the UIScreen scale factor and loads the low- or high-res image accordingly (http://atastypixel.com/blog/uiimage-resolution-independence-and-the-iphone-4s-retina-display/), but that solution loads a 2x image while leaving the image's scale factor at 1.0, which results in a 2x image scaled 2 times (so, 4 times bigger than it should look).
imageNamed seems to load low- and high-res images correctly, but it is not an option for me.
Does anybody have a solution for loading low/high-res images without the automatic loading of imageNamed or imageWithContentsOfFile? (Or, eventually, a solution for making imageWithContentsOfFile work correctly?)
OK, actual solution found by Michael here:
http://atastypixel.com/blog/uiimage-resolution-independence-and-the-iphone-4s-retina-display/
He figured out that UIImage has the method initWithCGImage:scale:orientation:, which also takes a scale factor as input (I guess the only method where you can set the scale factor yourself):
[[UIImage alloc] initWithCGImage:scale:orientation:]
This seems to work great: you can load your high-res images yourself and just set the scale factor to 2.0.
The problem with imageWithContentsOfFile is that since it currently does not work properly, we can't trust it even when it's fixed (because some users will still have an older iOS on their devices).
We just ran into this here at work.
Here is my work-around that seems to hold water:
NSString *imgFile = ...; // path to your file
NSData *imgData = [[NSData alloc] initWithContentsOfFile:imgFile];
UIImage *img = [[UIImage alloc] initWithData:imgData];
imageWithContentsOfFile works properly (considering @2x images with the correct scale) starting with iOS 4.1.
Enhancing Lisa Rossellis's answer to keep retina images at the desired size (not scaling them up):
NSString *imagePath = ...; // path to your image
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfFile:imagePath] scale:[UIScreen mainScreen].scale];
I've developed a drop-in workaround for this problem.
It uses method swizzling to replace the behavior of the "imageWithContentsOfFile:" method of UIImage.
It works fine on iPhones/iPods pre/post retina.
Not sure about the iPad.
Hope this is of help.
#import <objc/runtime.h>

@implementation NSString (LoadHighDef)

/** If self is the path to an image, returns the nominal path to the high-res variant of that image */
- (NSString *)stringByInsertingHighResPathModifier {
    NSString *path = [self stringByDeletingPathExtension];

    // We determine whether a device modifier is present, and if it is, where the
    // "split position" is at which the "@2x" token is to be added
    NSArray *deviceModifiers = [NSArray arrayWithObjects:@"~iphone", @"~ipad", nil];
    NSInteger splitIdx = [path length];
    for (NSString *modifier in deviceModifiers) {
        if ([path hasSuffix:modifier]) {
            splitIdx -= [modifier length];
            break;
        }
    }

    // We insert the "@2x" token in the string at the proper position; if no
    // device modifier is present the token is added at the end of the string
    NSString *highDefPath = [NSString stringWithFormat:@"%@@2x%@", [path substringToIndex:splitIdx], [path substringFromIndex:splitIdx]];

    // We possibly add the extension, if there is any extension at all
    NSString *ext = [self pathExtension];
    return [ext length] > 0 ? [highDefPath stringByAppendingPathExtension:ext] : highDefPath;
}

@end

@implementation UIImage (LoadHighDef)

/* Upon loading this category, the implementation of "imageWithContentsOfFile:" is exchanged with the implementation
 * of our custom "imageWithContentsOfFile_custom:" method, whereby we replace and fix the behavior of the system selector. */
+ (void)load {
    Method originalMethod = class_getClassMethod([UIImage class], @selector(imageWithContentsOfFile:));
    Method replacementMethod = class_getClassMethod([UIImage class], @selector(imageWithContentsOfFile_custom:));
    method_exchangeImplementations(replacementMethod, originalMethod);
}

/** This method works just like the system "imageWithContentsOfFile:", but it loads the high-res version of the image
 * instead of the default one in case the device's screen is high-res and the high-res variant of the image is present.
 *
 * We assume that the original "imageWithContentsOfFile:" implementation properly sets the "scale" factor upon
 * loading a "@2x" image (this is its behavior as of OS 4.0.1).
 *
 * Note: The "imageWithContentsOfFile_custom:" invocations in this code are not recursive calls, by virtue of
 * method swizzling. In fact, the original UIImage implementation of "imageWithContentsOfFile:" gets called.
 */
+ (UIImage *)imageWithContentsOfFile_custom:(NSString *)imgName {
    // If high-res is supported by the device...
    UIScreen *screen = [UIScreen mainScreen];
    if ([screen respondsToSelector:@selector(scale)] && [screen scale] >= 2.0) {
        // ...then we look for the high-res version of the image first
        UIImage *hiDefImg = [UIImage imageWithContentsOfFile_custom:[imgName stringByInsertingHighResPathModifier]];
        // If such a high-res version exists, we return it.
        // The scale factor will be correctly set, because once you give imageWithContentsOfFile:
        // the full hi-res path, it properly takes it into account.
        if (hiDefImg != nil)
            return hiDefImg;
    }
    // If the device does not support high-res, or it does but there is
    // no high-res variant of imgName, we return the base version
    return [UIImage imageWithContentsOfFile_custom:imgName];
}

@end
[UIImage imageWithContentsOfFile:] doesn't load @2x graphics if you specify an absolute path.
Here is a solution:
- (UIImage *)loadRetinaImageIfAvailable:(NSString *)path {
    NSString *retinaPath = [[path stringByDeletingLastPathComponent] stringByAppendingPathComponent:
        [NSString stringWithFormat:@"%@@2x.%@", [[path lastPathComponent] stringByDeletingPathExtension], [path pathExtension]]];
    if ([UIScreen mainScreen].scale == 2.0 && [[NSFileManager defaultManager] fileExistsAtPath:retinaPath] == YES)
        return [[[UIImage alloc] initWithCGImage:[[UIImage imageWithData:[NSData dataWithContentsOfFile:retinaPath]] CGImage] scale:2.0 orientation:UIImageOrientationUp] autorelease];
    else
        return [UIImage imageWithContentsOfFile:path];
}
Credit goes to Christof Dorner for his simple solution (which I modified and pasted here).
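The path rewriting used by both answers above can be sketched as follows (a rough Python illustration of the "@2x" insertion rule, including the ~iphone/~ipad device-modifier handling from the swizzling answer; the file names are made up):

```python
import os

def high_res_path(path, modifiers=("~iphone", "~ipad")):
    """Insert the "@2x" token before any device modifier and the extension."""
    root, ext = os.path.splitext(path)
    split = len(root)
    for m in modifiers:
        if root.endswith(m):
            split -= len(m)  # insert "@2x" before the device modifier
            break
    return root[:split] + "@2x" + root[split:] + ext

print(high_res_path("icon.png"))       # icon@2x.png
print(high_res_path("icon~ipad.png"))  # icon@2x~ipad.png
```

The Objective-C category performs the same transformation before falling back to the base path when no @2x file exists.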