How to show polylines added to a source in Mapbox iOS? - objective-c

I'm trying to show polylines taken from GeoJSON and added to a single shape source, but with no results:
NSString *jsonString = @"{\"type\": \"FeatureCollection\",\"features\": [{\"type\": \"Feature\",\"properties\": {},\"geometry\": {\"type\": \"LineString\",\"coordinates\": [[4.873809814453125,52.3755991766591],[4.882049560546875,52.339534544106435],[4.94659423828125,52.34708539110632],[4.94659423828125,52.376437538867776],[5.009765625,52.370568669179654]]}},{\"type\": \"Feature\",\"properties\": {},\"geometry\": {\"type\": \"LineString\",\"coordinates\": [[4.73785400390625,52.32694693334544],[4.882049560546875,52.32778621884898],[4.872436523437499,52.29420237796669],[4.9713134765625,52.340373590787394]]}}]}";
NSData *jsonData = [jsonString dataUsingEncoding:NSUTF8StringEncoding];
MGLShapeCollectionFeature *shapeCollectionFeature = (MGLShapeCollectionFeature *)[MGLShape shapeWithData:jsonData encoding:NSUTF8StringEncoding error:NULL];
MGLMultiPolyline *polylines = [MGLMultiPolyline multiPolylineWithPolylines:shapeCollectionFeature.shapes];
MGLShapeSource *source = [[MGLShapeSource alloc] initWithIdentifier:@"transit" shape:polylines options:nil];
[self.mapView.style addSource:source];
MGLLineStyleLayer *lineLayer = [[MGLLineStyleLayer alloc] initWithIdentifier:@"layer" source:source];
[self.mapView.style addLayer:lineLayer];
I logged the source object and both polylines are inside it. But why are they not shown? What am I doing wrong?
I'm using the Mapbox SDK 3.7.6 for iOS.

Are you using the -[MGLMapViewDelegate mapView:didFinishLoadingStyle:] method to ensure that your map has fully initialized before you add the style layer? If not, you are likely running into a race condition where the data you add is immediately overwritten as the style loads.
If you modify your code to ensure that the source and layer aren't added prematurely, I would expect your issue to resolve:
- (void)mapView:(MGLMapView *)mapView didFinishLoadingStyle:(MGLStyle *)style {
    // Build `polylines` from the GeoJSON as before, then add it here
    MGLShapeSource *source = [[MGLShapeSource alloc] initWithIdentifier:@"transit" shape:polylines options:nil];
    [style addSource:source];
    MGLLineStyleLayer *lineLayer = [[MGLLineStyleLayer alloc] initWithIdentifier:@"layer" source:source];
    [style addLayer:lineLayer];
}
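One detail the snippet above assumes: the delegate callback only fires if the map view's delegate is actually set. A minimal sketch, assuming the map view is created in viewDidLoad:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Without this, mapView:didFinishLoadingStyle: is never called
    self.mapView.delegate = self;
}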
⚠️ Disclaimer: I currently work at Mapbox ⚠️

Related

How do I pass image data to a native module in React Native?

In my class that implements the RCTBridgeModule protocol in Xcode, I am trying to write an RCT_EXPORT_METHOD that I can expose to React Native code to consume image data. Currently, I can write an image to disk in React Native and then pass the path through to the native method, but I'm wondering if there is a better technique to pass image data directly for better performance.
So instead of this:
RCT_EXPORT_METHOD(scanImage:(NSString *)path) {
    UIImage *sampleImage = [[UIImage alloc] initWithContentsOfFile:path];
    [self processImage:sampleImage];
}
Something more like this:
RCT_EXPORT_METHOD(scanImage:(NSData *)imageData) {
    [self processImageWithData:imageData];
}
You can use the [RCTConvert UIImage:icon] method from #import <React/RCTConvert.h>.
You need to specify the scheme as "data".
For more details, look at the source code below:
https://github.com/facebook/react-native/blob/master/React/Base/RCTConvert.m#L762
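For illustration, a hedged sketch of how that could look (scanImage: and processImage: are the question's own hypothetical names; RCTConvert's UIImage: accepts the image-source JSON passed from JavaScript, including data: URIs):
#import <React/RCTConvert.h>

RCT_EXPORT_METHOD(scanImage:(id)imageSource) {
    // RCTConvert decodes the image source (path, URL, or data: URI) into a UIImage
    UIImage *image = [RCTConvert UIImage:imageSource];
    [self processImage:image];
}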
For me, this works:
NSURL *url = [NSURL URLWithString:[imagePath stringByAddingPercentEncodingWithAllowedCharacters:[NSCharacterSet URLQueryAllowedCharacterSet]]];
UIImage *image = [[UIImage alloc] initWithData:[NSData dataWithContentsOfURL:url]];
imagePath is an NSString that comes over the React Native bridge (using react-native-image-picker's response.uri).

AVCaptureSession resolution doesn't change with AVCaptureSessionPreset

I want to change the resolution of pictures I take with the camera on OS X with AV Foundation.
But even if I change the resolution of my AVCaptureSession, the output picture size doesn't change; I always get a 1280x720 picture.
I want a lower resolution because I use these pictures in a real-time process and I want the program to be faster.
This is a sample of my code:
session = [[AVCaptureSession alloc] init];
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    [session setSessionPreset:AVCaptureSessionPreset640x480];
}
AVCaptureDeviceInput *device_input = [[AVCaptureDeviceInput alloc] initWithDevice:
    [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo][0] error:nil];
if ([session canAddInput:device_input])
    [session addInput:device_input];
still_image = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *output_settings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[still_image setOutputSettings:output_settings];
[session addOutput:still_image];
What should I change in my code?
I have also run into this issue and found a solution that seems to work. For some reason, on OS X adding an AVCaptureStillImageOutput breaks the capture session presets.
What I did was change the AVCaptureDevice's active format directly. Try this code right after you add your still image output to your capture session:
AVCaptureDevice *device = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo][0];

// Get a list of supported formats for the device
NSArray *supportedFormats = [device formats];

// Find the format closest to what you are looking for
// (this is just one way of finding it)
NSInteger desiredWidth = 640;
AVCaptureDeviceFormat *bestFormat;
for (AVCaptureDeviceFormat *format in supportedFormats) {
    CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions((CMVideoFormatDescriptionRef)[format formatDescription]);
    if (dimensions.width <= desiredWidth) {
        bestFormat = format;
    }
}

// Setting the active format directly overrides the session preset
[device lockForConfiguration:nil];
[device setActiveFormat:bestFormat];
[device unlockForConfiguration];
There may be other ways of fixing this issue, but this fixed it for me.
I had this problem too, but using the raw AVCaptureVideoDataOutput instead of JPEG.
The issue is that the session presets Low/Medium/High do affect the capture device in some ways (for example, the frame rate), but they won't change the hardware capture resolution: it will always capture at 1280x720. The idea, I think, is that a normal QuickTime output device will figure this out and add a scaling step to 640x480 (for example) if the session preset is set to Medium.
But when using the raw outputs, they won't care about the preset's desired dimensions.
The solution, contrary to Apple's documentation on videoSettings, is to add the requested dimensions to the videoSettings:
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithDouble:640], (id)kCVPixelBufferWidthKey,
    [NSNumber numberWithDouble:480], (id)kCVPixelBufferHeightKey,
    [NSNumber numberWithInt:kCMPixelFormat_422YpCbCr8_yuvs], (id)kCVPixelBufferPixelFormatTypeKey,
    nil];
[captureOutput setVideoSettings:outputSettings];
I say contrary to Apple's docs because the docs say that the pixel format type key is the only key allowed here. But the buffer width/height keys actually do work and are needed.
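Not part of the original answer, but a quick way to confirm that the override took effect is to log the dimensions of the incoming buffers in the data output's delegate (a sketch, assuming your class is set as the AVCaptureVideoDataOutput's sample buffer delegate):
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // CVPixelBufferGetWidth/Height report the real capture resolution
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    NSLog(@"Capturing at %zux%zu",
          CVPixelBufferGetWidth(pixelBuffer),
          CVPixelBufferGetHeight(pixelBuffer));
}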

How to add an NSArray to a video with AVMutableMetadataItem?

I am trying to save a video with custom NSArray metadata relevant to my app, and to retrieve it when the user selects that video from the library.
I'm using AVAssetExportSession to add the metadata.
I used the AVMovieExporter sample code and tried to change locationMetadata.value:
http://developer.apple.com/library/ios/#samplecode/AVMovieExporter/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011364
AVMutableMetadataItem *locationMetadata = [[AVMutableMetadataItem alloc] init];
locationMetadata.key = AVMetadataCommonKeyLocation;
locationMetadata.keySpace = AVMetadataKeySpaceCommon;
locationMetadata.locale = self.locale;
//locationMetadata.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf", self.location.coordinate.latitude, self.location.coordinate.longitude];
locationMetadata.value = [[NSArray alloc] initWithObjects:@"abc", @123, nil];
If I use an NSString as the value there is no problem, but if I use an NSArray, it doesn't save the metadata.
Where is the problem?
Apple's documentation states:
NSString *const AVMetadataCommonKeyLocation;
I understand that to mean the value must be a string and cannot be an array.
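If you need to store the array anyway, one workaround (my own sketch, not from the answer above) is to serialize it into a string first; NSJSONSerialization is available from iOS 5, and a custom key would arguably suit such a payload better than AVMetadataCommonKeyLocation:
// Hypothetical workaround: flatten the array to a JSON string and store that
NSArray *payload = [NSArray arrayWithObjects:@"abc", @123, nil];
NSData *json = [NSJSONSerialization dataWithJSONObject:payload options:0 error:NULL];
locationMetadata.value = [[NSString alloc] initWithData:json encoding:NSUTF8StringEncoding];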

NSString writeToFile does not stay permanent?

I am trying to save text stored in an NSString variable to a text file that ships in my project's main bundle.
So far I have had no success, and I have tried a lot of different methods.
Why doesn't this stay permanent?
NSString *pathToFile = [[NSString alloc] init];
pathToFile = [[NSBundle mainBundle] pathForResource:@"ListOfSavedImages" ofType:@"txt"];
NSLog(@"%@", pathToFile);
NSString *stringToWriteToFile = [[NSString alloc] init];
stringToWriteToFile = @"Adam";
NSLog(@"%@", stringToWriteToFile);
[stringToWriteToFile writeToFile:pathToFile atomically:YES encoding:NSUTF8StringEncoding error:NULL];
NSLog(@"called!");
NSString *contentsOfFile1 = [NSString stringWithContentsOfFile:pathToFile encoding:NSUTF8StringEncoding error:NULL];
NSLog(@"%@", contentsOfFile1);
The actual file doesn't change: the NSLog at the end of this code segment outputs "Adam", but I am also logging the contents of the file when the view loads, and it always reverts to the original text (it never actually changes). What am I doing wrong?
I am using Xcode 4.3, ARC, and storyboards.
As you are instantiating your variables locally, they will go away when you hit the closing } of the block.
Try using ivars declared as properties of the particular view controller, synthesized in the .m file, as sketched below.
Look at the Stanford CS193p course on iTunes, preferably the earlier series given before ARC, as it fully explains the concept of data persistence.
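A minimal sketch of what this answer suggests (the names are illustrative, not from the question):
// In the view controller's .h
@property (strong, nonatomic) NSString *pathToFile;

// In the .m
@synthesize pathToFile;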

How do I grab numbers from a table on a website in my Cocoa app?

OK, this is a more specific version of my last question.
On a website there is some data coded as an HTML table.
In my Cocoa app, I want to download the HTML code of the website and then read through it and grab the data. I was hoping someone could point out some useful classes/methods for retrieving the website and putting it into some format where I can read through the code in my program?
Thanks in advance!
Try using hpple; it's an HTML parser for Objective-C.
Here's an example using it:
#import "TFHpple.h"
NSData *data = [[NSData alloc] initWithContentsOfFile:#"example.html"];
// Create parser
xpathParser = [[TFHpple alloc] initWithHTMLData:data];
//Get all the cells of the 2nd row of the 3rd table
NSArray *elements = [xpathParser search:#"//table[3]/tr[2]/td"];
// Access the first cell
TFHppleElement *element = [elements objectAtIndex:0];
// Get the text within the cell tag
NSString *content = [element content];
[xpathParser release];
[data release];
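The snippet above parses a local file; to fetch the page itself, one simple approach (synchronous, so best kept off the main thread; the URL is a placeholder) is NSData's URL loading:
// Returns an autoreleased NSData, so no [data release] in this case
NSURL *url = [NSURL URLWithString:@"http://example.com/page.html"];
NSData *data = [NSData dataWithContentsOfURL:url];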