Draw route direction between user's current location to destination in iOS App? [duplicate] - objective-c

This question already has answers here:
Drawing Route Between Two Places on GMSMapView in iOS
(12 answers)
Closed 5 years ago.
I want to draw the route from the user's current location to a destination using the Google Maps Directions API.

Try this code (Swift 3.0):
var lat = #your latitude#
var lng = #your longitude#
var location = "your title"
let camera = GMSCameraPosition.camera(withLatitude: lat, longitude: lng, zoom: 9.0)
var mapView:GMSMapView = GMSMapView.map(withFrame: CGRect(origin: CGPoint(x: 0, y: 0), size: view.frame.size), camera: camera)
view.addSubview(mapView)
let markerStart = GMSMarker()
markerStart.position = CLLocationCoordinate2D(latitude: lat, longitude: lng)
markerStart.title = location
markerStart.snippet = location
markerStart.map = mapView
var latEnd = #your end latitude#
var lngEnd = #your end longitude#
let markerEnd = GMSMarker()
markerEnd.position = CLLocationCoordinate2D(latitude: latEnd, longitude: lngEnd)
markerEnd.title = location
markerEnd.snippet = location
markerEnd.map = mapView
let bounds = GMSCoordinateBounds(coordinate: markerStart.position, coordinate: markerEnd.position)
let boundUpdate = GMSCameraUpdate.fit(bounds, withPadding: 40)
mapView.animate(with: boundUpdate)
drawPath(currentLocation: markerStart.position, destinationLoc: markerEnd.position)
func drawPath(currentLocation: CLLocationCoordinate2D, destinationLoc: CLLocationCoordinate2D) {
    let origin = "\(currentLocation.latitude),\(currentLocation.longitude)"
    let destination = "\(destinationLoc.latitude),\(destinationLoc.longitude)"
    let url = "https://maps.googleapis.com/maps/api/directions/json?origin=\(origin)&destination=\(destination)&mode=driving"
    Alamofire.request(url).responseJSON { response in
        let json = JSON(data: response.data!)
        let routes = json["routes"].arrayValue
        for route in routes {
            let routeOverviewPolyline = route["overview_polyline"].dictionary
            let points = routeOverviewPolyline?["points"]?.stringValue
            let path = GMSPath(fromEncodedPath: points!)
            let polyline = GMSPolyline(path: path)
            polyline.strokeColor = UIColor.red
            polyline.strokeWidth = 3
            polyline.map = self.mapView
        }
    }
}
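If you would rather not pull in Alamofire and SwiftyJSON, a dependency-free sketch of the same request with URLSession and JSONSerialization could look like this. It assumes a reachable mapView property and the GoogleMaps SDK; note that the Directions API nowadays also requires a key parameter, omitted here as in the original:
func drawPath(from origin: CLLocationCoordinate2D, to destination: CLLocationCoordinate2D) {
    let urlString = "https://maps.googleapis.com/maps/api/directions/json?origin=\(origin.latitude),\(origin.longitude)&destination=\(destination.latitude),\(destination.longitude)&mode=driving"
    guard let url = URL(string: urlString) else { return }
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let routes = json["routes"] as? [[String: Any]] else { return }
        DispatchQueue.main.async {
            // polylines must be added on the main thread
            for route in routes {
                guard let overview = route["overview_polyline"] as? [String: Any],
                      let points = overview["points"] as? String else { continue }
                let polyline = GMSPolyline(path: GMSPath(fromEncodedPath: points))
                polyline.strokeColor = .red
                polyline.strokeWidth = 3
                polyline.map = self.mapView
            }
        }
    }.resume()
}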
Objective-C
// Note: this snippet is the body of a method that takes a completion block, e.g.:
- (void)fetchPolylineWithCompletionHandler:(void (^)(GMSPolyline *polyline))completionHandler
{
    double lat = #your latitude#;
    double lng = #your longitude#;
    double latEnd = #your end latitude#;
    double lngEnd = #your end longitude#;
    NSString *directionsUrlString = [NSString stringWithFormat:@"https://maps.googleapis.com/maps/api/directions/json?&origin=%f,%f&destination=%f,%f&mode=driving", lat, lng, latEnd, lngEnd];
    NSURL *directionsUrl = [NSURL URLWithString:directionsUrlString];
    NSURLSessionDataTask *mapTask = [[NSURLSession sharedSession] dataTaskWithURL:directionsUrl completionHandler:
        ^(NSData *data, NSURLResponse *response, NSError *error)
        {
            NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:kNilOptions error:&error];
            if (error)
            {
                if (completionHandler)
                    completionHandler(nil);
                return;
            }
            NSArray *routesArray = [json objectForKey:@"routes"];
            GMSPolyline *polyline = nil;
            if ([routesArray count] > 0)
            {
                NSDictionary *routeDict = [routesArray objectAtIndex:0];
                NSDictionary *routeOverviewPolyline = [routeDict objectForKey:@"overview_polyline"];
                NSString *points = [routeOverviewPolyline objectForKey:@"points"];
                GMSPath *path = [GMSPath pathFromEncodedPath:points];
                polyline = [GMSPolyline polylineWithPath:path];
            }
            // run completionHandler on main thread
            dispatch_sync(dispatch_get_main_queue(), ^{
                if (completionHandler)
                    completionHandler(polyline);
            });
        }];
    [mapTask resume];
}

Related

How to access data "deep" inside a NSDictionary?

I am getting a JSON response from a web service into an NSDictionary as follows:
NSDictionary *fetchAllCollectionsJSONResponse = [NSJSONSerialization JSONObjectWithData:data
options:0
error:NULL];
If I dump the output of the NSDictionary it looks correct like this
2017-10-06 10:11:46.097698+0800 NWMobileTill[396:33294] +[ShopifyWebServices fetchAllCollections]_block_invoke, {
data = {
shop = {
collections = {
edges = (
{
cursor = "eyJsYXN0X2lkIjo0NTI4NTY3MTcsImxhc3RfdmFsdWUiOiI0NTI4NTY3MTcifQ==";
node = {
description = "";
id = "Z2lkOi8vc2hvcGlmeS9Db2xsZWN0aW9uLzQ1Mjg1NjcxNw==";
};
},
{
cursor = "eyJsYXN0X2lkIjo0NTI4NTkwODUsImxhc3RfdmFsdWUiOiI0NTI4NTkwODUifQ==";
node = {
description = "Test Collection 1";
id = "Z2lkOi8vc2hvcGlmeS9Db2xsZWN0aW9uLzQ1Mjg1OTA4NQ==";
};
},
{
cursor = "eyJsYXN0X2lkIjo0NTU0OTMwMDUsImxhc3RfdmFsdWUiOiI0NTU0OTMwMDUifQ==";
node = {
description = Sovrum;
id = "Z2lkOi8vc2hvcGlmeS9Db2xsZWN0aW9uLzQ1NTQ5MzAwNQ==";
};
},
{
cursor = "eyJsYXN0X2lkIjo0NTU0OTMzODksImxhc3RfdmFsdWUiOiI0NTU0OTMzODkifQ==";
node = {
description = Badrum;
id = "Z2lkOi8vc2hvcGlmeS9Db2xsZWN0aW9uLzQ1NTQ5MzM4OQ==";
};
}
);
pageInfo = {
hasNextPage = 0;
};
};
};
};
}
I need to access the "description" attribute deep inside this structure and I cannot figure out how to do it.
I tried the following, but it crashes:
for (NSDictionary *dictionary in fetchAllCollectionsJSONResponse) {
    NSLog(@"jongel %@", [dictionary objectForKey:@"data"]);
}
@Bilal's answer is right. This might be a bit easier to read:
NSArray *edges = fetchAllCollectionsJSONResponse[@"data"][@"shop"][@"collections"][@"edges"];
for (NSDictionary *edge in edges) {
    NSString *description = edge[@"node"][@"description"];
    NSLog(@"description = %@", description);
}
fetchAllCollectionsJSONResponse is a Dictionary, not an Array. Try this:
NSDictionary *fetchAllCollectionsJSONResponse = nil;
NSDictionary *data = fetchAllCollectionsJSONResponse[@"data"];
NSDictionary *shop = data[@"shop"];
NSDictionary *collections = shop[@"collections"];
NSArray *edges = collections[@"edges"];
// Or a shorter version
// NSArray *edges = fetchAllCollectionsJSONResponse[@"data"][@"shop"][@"collections"][@"edges"];
for (NSDictionary *edge in edges) {
    NSString *cursor = edge[@"cursor"];
    NSDictionary *node = edge[@"node"];
}
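For comparison, the same traversal in Swift with a conditional cast at each level; a sketch, assuming json is the [String: Any] dictionary returned by JSONSerialization:
if let data = json["data"] as? [String: Any],
   let shop = data["shop"] as? [String: Any],
   let collections = shop["collections"] as? [String: Any],
   let edges = collections["edges"] as? [[String: Any]] {
    for edge in edges {
        if let node = edge["node"] as? [String: Any],
           let description = node["description"] as? String {
            print("description = \(description)")
        }
    }
}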

How to get Image size from URL in ios

How can I get the size (height/width) of an image from a URL in Objective-C? I want to size my container according to the image. I am using AFNetworking 3.0.
I could use SDWebImage if it fulfills my requirement.
Knowing the size of an image before actually loading it can be necessary in a number of cases. For example, setting the height of a table view cell in the heightForRowAtIndexPath method while loading the actual image later in cellForRowAtIndexPath (this is a very frequent catch-22).
One simple way to do it, is to read the image header from the server URL using the Image I/O interface:
#import <ImageIO/ImageIO.h>
NSString *imageURL = @"http://www.myimageurl.com/image.png";
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)[NSURL URLWithString:imageURL], NULL);
NSDictionary *imageHeader = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
CFRelease(source); // the image source is a Core Foundation object and must be released manually
NSLog(@"Image header %@", imageHeader);
NSLog(@"PixelHeight %@", [imageHeader objectForKey:@"PixelHeight"]);
Swift 4.x
Xcode 12.x
func sizeOfImageAt(url: URL) -> CGSize? {
    // with CGImageSource we avoid loading the whole image into memory
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
        return nil
    }
    let propertiesOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, propertiesOptions) as? [CFString: Any] else {
        return nil
    }
    if let width = properties[kCGImagePropertyPixelWidth] as? CGFloat,
       let height = properties[kCGImagePropertyPixelHeight] as? CGFloat {
        return CGSize(width: width, height: height)
    } else {
        return nil
    }
}
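One caveat worth spelling out: for a remote URL, CGImageSourceCreateWithURL still performs network I/O to fetch the header bytes, so you would typically call this off the main thread. A usage sketch (the URL is a placeholder):
DispatchQueue.global(qos: .userInitiated).async {
    let size = sizeOfImageAt(url: URL(string: "https://example.com/image.png")!)
    DispatchQueue.main.async {
        // e.g. use `size` to set the cell height before loading the full image
        print("image size: \(String(describing: size))")
    }
}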
Use GCD to download the image asynchronously without blocking the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // download image data from URL
    NSData *data = [[NSData alloc] initWithContentsOfURL:URL];
    // compose image from NSData
    UIImage *image = [[UIImage alloc] initWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI updates on the main thread
        // calculate height & width of image
        CGFloat height = image.size.height;
        CGFloat width = image.size.width;
    });
});
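The same idea in Swift, using URLSession instead of the blocking initWithContentsOfURL:; a sketch, assuming url is the image URL:
URLSession.shared.dataTask(with: url) { data, _, _ in
    guard let data = data, let image = UIImage(data: data) else { return }
    DispatchQueue.main.async {
        // UI updates on the main thread
        let height = image.size.height
        let width = image.size.width
        print("size: \(width) x \(height)")
    }
}.resume()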
For Swift 4 use this:
let imageURL = URL(string: post.imageBigPath)!
let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil)
let imageHeader = CGImageSourceCopyPropertiesAtIndex(source!, 0, nil)! as NSDictionary
print("Image header: \(imageHeader)")
The header looks like this:
Image header: {
ColorModel = RGB;
Depth = 8;
PixelHeight = 640;
PixelWidth = 640;
"{Exif}" = {
PixelXDimension = 360;
PixelYDimension = 360;
};
"{JFIF}" = {
DensityUnit = 0;
JFIFVersion = (
1,
0,
1
);
XDensity = 72;
YDensity = 72;
};
"{TIFF}" = {
Orientation = 0;
}; }
From this header you can get the width and height (PixelWidth / PixelHeight).
You can also try it like this (note that this downloads the entire image synchronously, so don't run it on the main thread):
NSData *data = [[NSData alloc] initWithContentsOfURL:URL];
UIImage *image = [[UIImage alloc] initWithData:data];
CGFloat height = image.size.height;
CGFloat width = image.size.width;

Attach array of photos with mailcore

I need to attach an array of photos to an email using MailCore. I found the following code in another question; however, I'm not sure how to apply it to the photos taken with the camera in my app.
My guess is that I have to get the filenames of the photos and save them as strings to an array, and then attach the strings to the email using this snippet of code:
NSArray *allAttachments = [NSArray arrayWithObjects:@{@"FilePathOnDevice": @"/var/mobile/etc..", @"FileTitle": @"IMG_0522.JPG"}, nil];
for (int x = 0; x < allAttachments.count; x++) {
    NSString *attachmentPath = [[allAttachments objectAtIndex:x] valueForKey:@"FilePathOnDevice"];
    MCOAttachment *attachment = [MCOAttachment attachmentWithContentsOfFile:attachmentPath];
    [msgBuilder addAttachment:attachment];
}
This is how I'm getting the photo using the camera
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [NSObject : AnyObject]) {
    imagePicker.sourceType = .Camera
    imagePicker.dismissViewControllerAnimated(true, completion: nil)
    var photo = info[UIImagePickerControllerOriginalImage] as? UIImage
    bedroomCells[lastSelectedIndex!.row].image = photo
    tableView.reloadData()
}
Thank you for your help!
I ended up saving the image to the documents directory and using that file path to attach the image to MailCore.
Here is my code for the image picker:
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [NSObject : AnyObject]) {
    imagePicker.sourceType = .Camera
    imagePicker.dismissViewControllerAnimated(true, completion: nil)
    var photo = info[UIImagePickerControllerOriginalImage] as? UIImage
    bedroomCells[lastSelectedIndex!.row].image = photo
    var photoLabel = bedroomCells[lastSelectedIndex!.row].text
    tableView.reloadData()
    // save photo to the documents directory
    var imageData = UIImageJPEGRepresentation(photo, 1.0)
    var imageFilePath = fileDirectory[0].stringByAppendingPathComponent("\(photoLabel!).jpg")
    var imageFileURL = NSURL(fileURLWithPath: imageFilePath)
    imageData.writeToURL(imageFileURL!, atomically: false)
}
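For reference, the same save step in modern Swift might look like this; a sketch, where saveToDocuments is a hypothetical helper:
// Hypothetical helper: writes a JPEG into the documents directory and returns its URL.
func saveToDocuments(_ photo: UIImage, named photoLabel: String) throws -> URL {
    let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let url = docs.appendingPathComponent("\(photoLabel).jpg")
    guard let data = photo.jpegData(compressionQuality: 1.0) else {
        throw CocoaError(.fileWriteUnknown)
    }
    try data.write(to: url)
    return url
}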
and the code for my mailcore method:
MCOMessageBuilder *builder = [[MCOMessageBuilder alloc] init];
NSString *documentsDirectory = [fileDirectory objectAtIndex:0];
NSArray *allAttachments = [[NSFileManager defaultManager] subpathsOfDirectoryAtPath:documentsDirectory error:nil];
for (int x = 0; x < allAttachments.count; x++) {
    NSString *attachmentPath = [documentsDirectory stringByAppendingPathComponent:[allAttachments objectAtIndex:x]];
    MCOAttachment *attachment = [MCOAttachment attachmentWithContentsOfFile:attachmentPath];
    [builder addAttachment:attachment];
}
In Swift:
var dataImage: NSData?
dataImage = UIImageJPEGRepresentation(image, 0.6)!
var attachment = MCOAttachment()
attachment.mimeType = "image/jpg"
attachment.filename = "image.jpg"
attachment.data = dataImage
builder.addAttachment(attachment)
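To attach an array of photos (as the question asks), the same pattern can be looped; a sketch in modern Swift, where the images array and the builder are assumed to exist already:
// Attach each photo in `images` to an existing MCOMessageBuilder.
func attach(_ images: [UIImage], to builder: MCOMessageBuilder) {
    for (index, image) in images.enumerated() {
        guard let data = image.jpegData(compressionQuality: 0.6) else { continue }
        let attachment = MCOAttachment()
        attachment.mimeType = "image/jpeg"
        attachment.filename = "photo\(index).jpg" // hypothetical naming scheme
        attachment.data = data
        builder.addAttachment(attachment)
    }
}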

How to get lat and long coordinates from address string

I have an MKMapView with a UISearchBar on top, and I want the user to be able to type an address, then find that address and drop a pin on it. What I don't know is how to turn the address string into longitude and latitude so I can make a CLLocation object. Does anyone know how I can do this?
You may find your answer in this question: iOS - MKMapView place annotation by using address instead of lat/long, by user Romes.
NSString *location = @"some address, state, and zip";
CLGeocoder *geocoder = [[CLGeocoder alloc] init];
[geocoder geocodeAddressString:location
             completionHandler:^(NSArray *placemarks, NSError *error) {
                 if (placemarks && placemarks.count > 0) {
                     CLPlacemark *topResult = [placemarks objectAtIndex:0];
                     MKPlacemark *placemark = [[MKPlacemark alloc] initWithPlacemark:topResult];
                     MKCoordinateRegion region = self.mapView.region;
                     region.center = placemark.region.center;
                     region.span.longitudeDelta /= 8.0;
                     region.span.latitudeDelta /= 8.0;
                     [self.mapView setRegion:region animated:YES];
                     [self.mapView addAnnotation:placemark];
                 }
             }];
A very simple solution, but only applicable on iOS 5.1 or later.
I used a similar approach to Vijay's, but had to adjust one line of code: region.center = placemark.region.center didn't work for me. Maybe my code helps someone as well:
let location: String = "1 Infinite Loop, CA, USA"
let geocoder: CLGeocoder = CLGeocoder()
geocoder.geocodeAddressString(location, completionHandler: { (placemarks: [CLPlacemark]?, error: NSError?) -> Void in
    if (placemarks?.count > 0) {
        let topResult: CLPlacemark = (placemarks?[0])!
        let placemark: MKPlacemark = MKPlacemark(placemark: topResult)
        var region: MKCoordinateRegion = self.mapView.region
        region.center.latitude = (placemark.location?.coordinate.latitude)!
        region.center.longitude = (placemark.location?.coordinate.longitude)!
        region.span = MKCoordinateSpanMake(0.5, 0.5)
        self.mapView.setRegion(region, animated: true)
        self.mapView.addAnnotation(placemark)
    }
})
For Swift 2:
var location: String = "some address, state, and zip"
var geocoder: CLGeocoder = CLGeocoder()
geocoder.geocodeAddressString(location, completionHandler: { (placemarks: [CLPlacemark]?, error: NSError?) -> Void in
    if (placemarks?.count > 0) {
        var topResult: CLPlacemark = (placemarks?[0])!
        var placemark: MKPlacemark = MKPlacemark(placemark: topResult)
        var region: MKCoordinateRegion = self.mapView.region
        // note: as mentioned above, placemark.region.center may not work;
        // placemark.location?.coordinate is the safer choice
        region.center = placemark.region.center
        region.span.longitudeDelta /= 8.0
        region.span.latitudeDelta /= 8.0
        self.mapView.setRegion(region, animated: true)
        self.mapView.addAnnotation(placemark)
    }
})
func geoCodeUsingAddress(address: NSString) -> CLLocationCoordinate2D {
    var latitude: Double = 0
    var longitude: Double = 0
    // note: this endpoint now requires an API key, and Data(contentsOf:) blocks the calling thread
    let addressstr: NSString = "http://maps.google.com/maps/api/geocode/json?sensor=false&address=\(address)" as NSString
    let urlStr = addressstr.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed)
    let searchURL: NSURL = NSURL(string: urlStr! as String)!
    do {
        let newdata = try Data(contentsOf: searchURL as URL)
        if let responseDictionary = try JSONSerialization.jsonObject(with: newdata, options: []) as? NSDictionary {
            print(responseDictionary)
            let array = responseDictionary.object(forKey: "results") as! NSArray
            let dic = array[0] as! NSDictionary
            let locationDic = (dic.object(forKey: "geometry") as! NSDictionary).object(forKey: "location") as! NSDictionary
            latitude = locationDic.object(forKey: "lat") as! Double
            longitude = locationDic.object(forKey: "lng") as! Double
        }
    } catch {
    }
    var center = CLLocationCoordinate2D()
    center.latitude = latitude
    center.longitude = longitude
    return center
}
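Since the HTTP geocoding endpoint now requires an API key and the code above blocks the calling thread, a CLGeocoder-based sketch is usually the better fit today. Note that the coordinate arrives asynchronously, so the function takes a completion handler instead of returning a value:
import CoreLocation

// Geocode an address string; the completion receives nil if nothing was found.
func geocode(address: String, completion: @escaping (CLLocationCoordinate2D?) -> Void) {
    CLGeocoder().geocodeAddressString(address) { placemarks, _ in
        completion(placemarks?.first?.location?.coordinate)
    }
}
// Usage: geocode(address: "1 Infinite Loop, CA, USA") { coordinate in ... }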

Create a thumbnail or image of an AVPlayer at current time

I have implemented an AVPlayer and I want to take an image or thumbnail when a toolbar button is tapped and open it in a new UIViewController with a UIImageView. The image should be scaled exactly like the AVPlayer's.
The segue is already working; I just have to get the image at the current play time.
Thanks!
Objective-C
AVAsset *asset = [AVAsset assetWithURL:sourceURL];
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc]initWithAsset:asset];
CMTime time = CMTimeMake(1, 1);
CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL];
UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // CGImageRef won't be released by ARC
Swift
let asset = AVAsset(URL: sourceURL)
let imageGenerator = AVAssetImageGenerator(asset: asset)
let time = CMTimeMake(1, 1)
let imageRef = try! imageGenerator.copyCGImageAtTime(time, actualTime: nil)
let thumbnail = UIImage(CGImage: imageRef)
// no CGImageRelease needed: Core Foundation objects are memory managed in Swift
Swift 3.0
var sourceURL = URL(string: "Your Asset URL")
var asset = AVAsset(url: sourceURL!)
var imageGenerator = AVAssetImageGenerator(asset: asset)
var time = CMTimeMake(1, 1)
var imageRef = try! imageGenerator.copyCGImage(at: time, actualTime: nil)
var thumbnail = UIImage(cgImage:imageRef)
Note: adapt the Swift code to your Swift version.
Try this:
- (UIImage *)takeScreenShot {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:vidURL options:nil];
    AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    imageGenerator.appliesPreferredTrackTransform = YES;
    NSError *err = NULL;
    CMTime time = CMTimeMake(1, 60); // time at which you want the screenshot
    CGImageRef imgRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:&err];
    return [[UIImage alloc] initWithCGImage:imgRef];
}
Hope this helps !!!
Swift 2.x:
let asset = AVAsset(...)
let imageGenerator = AVAssetImageGenerator(asset: asset)
let screenshotTime = CMTime(seconds: 1, preferredTimescale: 1)
if let imageRef = try? imageGenerator.copyCGImageAtTime(screenshotTime, actualTime: nil) {
    let image = UIImage(CGImage: imageRef)
    // do something with your image
}
Add the code below to generate a thumbnail from the video.
AVURLAsset *assetURL = [[AVURLAsset alloc] initWithURL:partOneUrl options:nil];
AVAssetImageGenerator *assetGenerator = [[AVAssetImageGenerator alloc] initWithAsset:assetURL];
assetGenerator.appliesPreferredTrackTransform = YES;
NSError *err = NULL;
CMTime time = CMTimeMake(1, 2);
CGImageRef imgRef = [assetGenerator copyCGImageAtTime:time actualTime:NULL error:&err];
UIImage *one = [[UIImage alloc] initWithCGImage:imgRef];
This is how I get a shot of the current visible frame on the scene in Swift:
The key is to
get the current time of the player which is of type CMTime
convert that time into seconds of type Float64
switch the seconds back to CMTime using CMTimeMake. The first parameter, which is where the seconds go, should be cast to Int64
Code:
var myImage: UIImage?
guard let player = player else { return }
let currentTime: CMTime = player.currentTime() // step 1.
let currentTimeInSecs: Float64 = CMTimeGetSeconds(currentTime) // step 2.
let actionTime: CMTime = CMTimeMake(Int64(currentTimeInSecs), 1) // step 3.
let asset = AVAsset(url: fileUrl)
let imageGenerator = AVAssetImageGenerator(asset: asset)
imageGenerator.appliesPreferredTrackTransform = true // prevent image rotation
do {
    let imageRef = try imageGenerator.copyCGImage(at: actionTime, actualTime: nil)
    myImage = UIImage(cgImage: imageRef)
} catch let err as NSError {
    print(err.localizedDescription)
}
Swift extension for generating thumbnails from video
extension AVPlayer {
    func generateThumbnail(time: CMTime) -> UIImage? {
        guard let asset = currentItem?.asset else { return nil }
        let imageGenerator = AVAssetImageGenerator(asset: asset)
        do {
            let cgImage = try imageGenerator.copyCGImage(at: time, actualTime: nil)
            return UIImage(cgImage: cgImage)
        } catch {
            print(error.localizedDescription)
        }
        return nil
    }
}
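Usage sketch (imageView is a placeholder for wherever the thumbnail should go):
if let thumbnail = player.generateThumbnail(time: player.currentTime()) {
    imageView.image = thumbnail
}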
When you need to create multiple thumbnails at once, the class AVAssetImageGenerator is golden, as it provides an asynchronous batch API.
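A sketch of that batch API, assuming asset is the AVAsset and the positions you want thumbnails for are given in seconds:
let generator = AVAssetImageGenerator(asset: asset)
let times = [1.0, 5.0, 10.0].map { NSValue(time: CMTime(seconds: $0, preferredTimescale: 600)) }
generator.generateCGImagesAsynchronously(forTimes: times) { _, cgImage, _, result, _ in
    if result == .succeeded, let cgImage = cgImage {
        let thumbnail = UIImage(cgImage: cgImage)
        // deliver `thumbnail` to the UI on the main queue
    }
}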
If you need a thumbnail image of the player's current frame, simply render its view (platform specific) or its layer (platform independent):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGSize frameSize = _playerLayer.frame.size;
CGContextRef thumbnailContext = CGBitmapContextCreate(nil, frameSize.width, frameSize.height, 8, 0, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
[_playerLayer renderInContext:thumbnailContext];
CGImageRef playerThumbnail = CGBitmapContextCreateImage(thumbnailContext);
CGContextRelease(thumbnailContext);
This is super fast and works synchronously.
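A Swift equivalent of that layer render, assuming playerLayer is your AVPlayerLayer (caveat: depending on how the video is composited, render(in:) may produce an empty frame, so verify on a device):
let renderer = UIGraphicsImageRenderer(size: playerLayer.frame.size)
let snapshot = renderer.image { context in
    playerLayer.render(in: context.cgContext)
}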
Code for 2022:
seconds = .. the normal human-meaning desired time position in the video
guard let pl = .. your player ..
guard let ite = pl.currentItem ..
let testGen = AVAssetImageGenerator(asset: ite.asset)
testGen.maximumSize = CGSize(width: 0, height: .. height of your preview box)
testGen.requestedTimeToleranceBefore = .zero // during development
// or something like ... CMTime(value: .. your tolerance .., timescale: 600)
testGen.requestedTimeToleranceAfter = .zero // during development
// ditto
if #available(tvOS 16, *) {
    Task { [weak self] ..
        do {
            let ct = CMTime(value: CMTimeValue(seconds), timescale: 1) // note the "1"
            let (foundImage, foundTime) = try await testGen.image(at: ct)
            let foundAsSecs = CMTimeGetSeconds(foundTime)
            print("tried gen at \(seconds) found as \(foundAsSecs) \n")
            self. .. your preview .image = UIImage(cgImage: foundImage)
        } catch {
            print("gen err \(error)")
        }
    }
}
Setting the two tolerances is a sophisticated issue; search around for the details.
Watch out for the gotcha where a timescale of 1 is needed for the CMTime.