Converting Objective-C Block into Closure the Swift way - objective-c

After learning how blocks are replaced by closures in Swift, I have successfully converted the following method from Obj-C to Swift:
Obj-C:
- (RACSignal *)fetchCurrentConditionsForLocation:(CLLocationCoordinate2D)coordinate {
    NSString *urlString = [NSString stringWithFormat:@"http://api.openweathermap.org/data/2.5/weather?lat=%f&lon=%f&units=imperial", coordinate.latitude, coordinate.longitude];
    NSURL *url = [NSURL URLWithString:urlString];
    return [[self fetchJSONFromURL:url] map:^(NSDictionary *json) {
        return [MTLJSONAdapter modelOfClass:[WXCondition class] fromJSONDictionary:json error:nil];
    }];
}
Swift:
func fetchCurrentConditionsForLocation(coordinate: CLLocationCoordinate2D) -> RACSignal {
    let urlString = NSString(format: "http://api.openweathermap.org/data/2.5/weather?lat=%f&lon=%f&units=metric", coordinate.latitude, coordinate.longitude)
    let url = NSURL.URLWithString(urlString)
    return fetchJSONFromURL(url).map { json in
        return MTLJSONAdapter.modelOfClass(WXCondition.self, fromJSONDictionary: json as NSDictionary, error: nil)
    }
}
However, I'm having trouble converting the following return map block to Swift:
func fetchHourlyForecastForLocation(coordinate: CLLocationCoordinate2D) -> RACSignal {
    var urlString = NSString(format: "http://api.openweathermap.org/data/2.5/forecast?lat=%f&lon=%f&units=metric&cnt=12", coordinate.latitude, coordinate.longitude)
    let url = NSURL.URLWithString(urlString)
    /* Original Obj-C:
    return [[self fetchJSONFromURL:url] map:^(NSDictionary *json) {
        RACSequence *list = [json[@"list"] rac_sequence];
        return [[list map:^(NSDictionary *item) {
            return [MTLJSONAdapter modelOfClass:[WXCondition class] fromJSONDictionary:item error:nil];
        }] array];
    }];
    }
    */
    // My attempt at conversion to Swift
    // (I can't resolve `rac_sequence` properly). Kind of confused
    // as to how to cast it properly and return
    // it as an "rac_sequence" array.
    return fetchJSONFromURL(url).map { json in
        let list = RACSequence()
        list = [json["list"] rac_sequence]
        return (list).map { item in {
            return MTLJSONAdapter.modelOfClass(WXCondition.self, fromJSONDictionary: item as NSDictionary, error: nil)
        } as NSArray
    }
}
If it helps, this is what rac_sequence is:
- (RACSequence *)rac_sequence {
    return [RACArraySequence sequenceWithArray:self offset:0];
}
RACArraySequence():
+ (instancetype)sequenceWithArray:(NSArray *)array offset:(NSUInteger)offset;
EDIT: The fetch method returns a RACSignal, not an NSArray:
func fetchJSONFromURL(url: NSURL) -> RACSignal {
}

It looks like you simply forgot the return type in the function declaration. The declaration should look something like this:
func fetchHourlyForecastForLocation(coordinate: CLLocationCoordinate2D) -> RACSignal { //...rest of function
Because the return type is now named, you can remove the `as NSArray` cast in the return statement at the end of the function. Hope this helps!
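For reference, here is a rough sketch of how the whole function might look once the return type is declared and the cast is removed (untested; it assumes the NSArray rac_sequence category and RACSequence's map/array bridge into Swift as ordinary methods):
func fetchHourlyForecastForLocation(coordinate: CLLocationCoordinate2D) -> RACSignal {
    let urlString = NSString(format: "http://api.openweathermap.org/data/2.5/forecast?lat=%f&lon=%f&units=metric&cnt=12", coordinate.latitude, coordinate.longitude)
    let url = NSURL.URLWithString(urlString)
    return fetchJSONFromURL(url).map { json in
        // Mirror [json[@"list"] rac_sequence] from the Objective-C version:
        let list = (json as NSDictionary)["list"] as NSArray
        return list.rac_sequence().map { item in
            return MTLJSONAdapter.modelOfClass(WXCondition.self, fromJSONDictionary: item as NSDictionary, error: nil)
        }.array()
    }
}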

Try this:
func fetchHourlyForecastForLocation(coordinate: CLLocationCoordinate2D, completion: (AnyObject) -> Void) {
    // after getting the NSArray (mapping)
    completion(_array)
}
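If you go down that completion-handler road instead of returning a RACSignal, the call site might look something like this (a sketch; `forecasts` is just a placeholder name for whatever the mapped NSArray is):
fetchHourlyForecastForLocation(coordinate) { forecasts in
    // forecasts is the mapped NSArray of WXCondition models
    println(forecasts)
}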

Related

How to cache images in Objective-C

I want to create a method that caches an image from a URL. I have the code in Swift, since I had used it before; how can I do something similar in Objective-C?
import UIKit

let imageCache = NSCache<AnyObject, AnyObject>()

extension UIImageView {
    func loadImageUsingCacheWithUrlString(urlString: String) {
        self.image = nil
        if let cachedImage = imageCache.object(forKey: urlString as AnyObject) as? UIImage {
            self.image = cachedImage
            return
        }
        let url = URL(string: urlString)
        if let data = try? Data(contentsOf: url!) {
            DispatchQueue.main.async(execute: {
                if let downloadedImage = UIImage(data: data) {
                    imageCache.setObject(downloadedImage, forKey: urlString as AnyObject)
                    self.image = downloadedImage
                }
            })
        }
    }
}
Before you convert this, you might consider refactoring to make it asynchronous:
One should never use Data(contentsOf:) for network requests because (a) it is synchronous and blocks the caller (which is a horrible UX, but also, in degenerate cases, can cause the watchdog process to kill your app); (b) if there is a problem, there’s no diagnostic information; and (c) it is not cancelable.
Rather than updating the image property when done, you should consider the completion-handler pattern, so the caller knows when the request is done and the image is ready. This pattern avoids race conditions and lets you run concurrent image requests.
When you use this asynchronous pattern, URLSession runs its completion handlers on a background queue. You should keep the processing of the image and the updating of the cache on that background queue; only the completion handler should be dispatched back to the main queue.
I infer from your question that your intent was to use this code in a UIImageView extension. You really should put this code in a separate object (I created an ImageManager singleton) so that the cache is available not only to image views, but anywhere you might need images. You might, for example, prefetch images outside of a UIImageView; if this code is buried in the UIImageView extension, you lose that flexibility.
Thus, perhaps something like:
final class ImageManager {
    static let shared = ImageManager()

    enum ImageFetchError: Error {
        case invalidURL
        case networkError(Data?, URLResponse?)
    }

    private let imageCache = NSCache<NSString, UIImage>()

    private init() { }

    @discardableResult
    func fetchImage(urlString: String, completion: @escaping (Result<UIImage, Error>) -> Void) -> URLSessionTask? {
        if let cachedImage = imageCache.object(forKey: urlString as NSString) {
            completion(.success(cachedImage))
            return nil
        }

        guard let url = URL(string: urlString) else {
            completion(.failure(ImageFetchError.invalidURL))
            return nil
        }

        let task = URLSession.shared.dataTask(with: url) { data, response, error in
            guard
                error == nil,
                let responseData = data,
                let httpUrlResponse = response as? HTTPURLResponse,
                200 ..< 300 ~= httpUrlResponse.statusCode,
                let image = UIImage(data: responseData)
            else {
                DispatchQueue.main.async {
                    completion(.failure(error ?? ImageFetchError.networkError(data, response)))
                }
                return
            }

            self.imageCache.setObject(image, forKey: urlString as NSString)
            DispatchQueue.main.async {
                completion(.success(image))
            }
        }
        task.resume()
        return task
    }
}
And you'd call it like:
ImageManager.shared.fetchImage(urlString: someUrl) { result in
    switch result {
    case .failure(let error): print(error)
    case .success(let image): break // do something with image
    }
}
// but do not try to use `image` here, as it has not been fetched yet
If you wanted to use this in a UIImageView extension, for example, you could save the URLSessionTask, so that you could cancel it if you requested another image before the prior one finished. (This is a very common scenario when using this in table views and the user scrolls very quickly, for example. You do not want to get backlogged in a ton of network requests.) We could use associated objects to stash the task and the current URL on the image view:
extension UIImageView {
    private static var taskKey = 0
    private static var urlKey = 0

    private var currentTask: URLSessionTask? {
        get { objc_getAssociatedObject(self, &Self.taskKey) as? URLSessionTask }
        set { objc_setAssociatedObject(self, &Self.taskKey, newValue, .OBJC_ASSOCIATION_RETAIN_NONATOMIC) }
    }

    private var currentURLString: String? {
        get { objc_getAssociatedObject(self, &Self.urlKey) as? String }
        set { objc_setAssociatedObject(self, &Self.urlKey, newValue, .OBJC_ASSOCIATION_RETAIN_NONATOMIC) }
    }

    func setImage(with urlString: String) {
        if let oldTask = currentTask {
            currentTask = nil
            oldTask.cancel()
        }

        image = nil

        currentURLString = urlString
        let task = ImageManager.shared.fetchImage(urlString: urlString) { result in
            // only reset if the current value is for this url
            if urlString == self.currentURLString {
                self.currentTask = nil
                self.currentURLString = nil
            }

            // now use the image
            if case .success(let image) = result {
                self.image = image
            }
        }
        currentTask = task
    }
}
There are tons of other things you might do in this UIImageView extension (e.g. placeholder images or the like), but by separating the UIImageView extension from the network layer, one keeps these different tasks in their own respective classes (in the spirit of the single responsibility principle).
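For instance, a hypothetical placeholder-aware convenience (the parameter name is illustrative, not part of the code above) could be layered on top of it:
extension UIImageView {
    // Hypothetical convenience: kick off the cached/async fetch, then show a
    // placeholder until the real image arrives.
    func setImage(with urlString: String, placeholder: UIImage?) {
        setImage(with: urlString)   // cancels any prior task and starts the new fetch
        image = placeholder         // visible until the completion handler sets the real image
    }
}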
OK, with that behind us, let us look at the Objective-C rendition. For example, you might create an ImageManager singleton:
// ImageManager.h

@import UIKit;

NS_ASSUME_NONNULL_BEGIN

typedef NS_ENUM(NSUInteger, ImageManagerError) {
    ImageManagerErrorInvalidURL,
    ImageManagerErrorNetworkError,
    ImageManagerErrorNotValidImage
};

@interface ImageManager : NSObject

// if you make this a singleton, mark normal instantiation methods as unavailable ...
+ (instancetype)alloc __attribute__((unavailable("alloc not available, call sharedImageManager instead")));
- (instancetype)init __attribute__((unavailable("init not available, call sharedImageManager instead")));
+ (instancetype)new __attribute__((unavailable("new not available, call sharedImageManager instead")));
- (instancetype)copy __attribute__((unavailable("copy not available, call sharedImageManager instead")));

// ... and expose singleton access point
@property (class, nonnull, readonly, strong) ImageManager *sharedImageManager;

// provide fetch method
- (NSURLSessionTask * _Nullable)fetchImageWithURLString:(NSString *)urlString completion:(void (^)(UIImage * _Nullable image, NSError * _Nullable error))completion;

@end

NS_ASSUME_NONNULL_END
and then implement this singleton:
// ImageManager.m

#import "ImageManager.h"

@interface ImageManager()
@property (nonatomic, strong) NSCache<NSString *, UIImage *> *imageCache;
@end

@implementation ImageManager

+ (instancetype)sharedImageManager {
    static dispatch_once_t onceToken;
    static ImageManager *shared;
    dispatch_once(&onceToken, ^{
        shared = [[self alloc] initPrivate];
    });
    return shared;
}

- (instancetype)initPrivate {
    self = [super init];
    if (self) {
        _imageCache = [[NSCache alloc] init];
    }
    return self;
}

- (NSURLSessionTask *)fetchImageWithURLString:(NSString *)urlString completion:(void (^)(UIImage *image, NSError *error))completion {
    UIImage *cachedImage = [self.imageCache objectForKey:urlString];
    if (cachedImage) {
        completion(cachedImage, nil);
        return nil;
    }

    NSURL *url = [NSURL URLWithString:urlString];
    if (!url) {
        NSError *error = [NSError errorWithDomain:[[NSBundle mainBundle] bundleIdentifier] code:ImageManagerErrorInvalidURL userInfo:nil];
        completion(nil, error);
        return nil;
    }

    NSURLSessionTask *task = [NSURLSession.sharedSession dataTaskWithURL:url completionHandler:^(NSData * _Nullable data, NSURLResponse * _Nullable response, NSError * _Nullable error) {
        if (error) {
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(nil, error);
            });
            return;
        }

        if (!data) {
            NSError *dataError = [NSError errorWithDomain:[[NSBundle mainBundle] bundleIdentifier] code:ImageManagerErrorNetworkError userInfo:nil];
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(nil, dataError);
            });
            return; // without this, we would fall through and try to build an image from nil data
        }

        UIImage *image = [UIImage imageWithData:data];
        if (!image) {
            NSDictionary *userInfo = @{
                @"data": data,
                @"response": response ? response : [NSNull null]
            };
            NSError *imageError = [NSError errorWithDomain:[[NSBundle mainBundle] bundleIdentifier] code:ImageManagerErrorNotValidImage userInfo:userInfo];
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(nil, imageError);
            });
            return; // likewise, do not cache or report success for an invalid image
        }

        [self.imageCache setObject:image forKey:urlString];
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(image, nil);
        });
    }];
    [task resume];

    return task;
}

@end
And you'd call it like:
[[ImageManager sharedImageManager] fetchImageWithURLString:urlString completion:^(UIImage * _Nullable image, NSError * _Nullable error) {
    if (error) {
        NSLog(@"%@", error);
        return;
    }

    // do something with `image` here ...
}];

// but not here, because the above runs asynchronously
And, again, you could use this from within a UIImageView extension:
#import <objc/runtime.h>

@implementation UIImageView (Cache)

static char taskKey;
static char urlKey;

- (void)setImageWithURLString:(NSString *)urlString {
    NSURLSessionTask *oldTask = objc_getAssociatedObject(self, &taskKey);
    if (oldTask) {
        objc_setAssociatedObject(self, &taskKey, nil, OBJC_ASSOCIATION_RETAIN_NONATOMIC);
        [oldTask cancel];
    }

    self.image = nil;

    objc_setAssociatedObject(self, &urlKey, urlString, OBJC_ASSOCIATION_RETAIN_NONATOMIC);
    NSURLSessionTask *task = [[ImageManager sharedImageManager] fetchImageWithURLString:urlString completion:^(UIImage * _Nullable image, NSError * _Nullable error) {
        NSString *currentURL = objc_getAssociatedObject(self, &urlKey);
        if ([currentURL isEqualToString:urlString]) {
            objc_setAssociatedObject(self, &urlKey, nil, OBJC_ASSOCIATION_RETAIN_NONATOMIC);
            objc_setAssociatedObject(self, &taskKey, nil, OBJC_ASSOCIATION_RETAIN_NONATOMIC);
        }

        if (image) {
            self.image = image;
        }
    }];
    objc_setAssociatedObject(self, &taskKey, task, OBJC_ASSOCIATION_RETAIN_NONATOMIC);
}

@end
After trial and error this worked:
#import "UIImageView+Cache.h"
#implementation UIImageView (Cache)
NSCache* imageCache;
- (void)loadImageUsingCacheWithUrlString:(NSString*)urlString {
imageCache = [[NSCache alloc] init];
self.image = nil;
UIImage *cachedImage = [imageCache objectForKey:(id)urlString];
if (cachedImage != nil) {
self.image = cachedImage;
return;
}
NSURL *url = [NSURL URLWithString:urlString];
NSData *data = [NSData dataWithContentsOfURL:url];
if (data != nil) {
dispatch_async(dispatch_get_main_queue(), ^{
UIImage *downloadedImage = [UIImage imageWithData:data];
if (downloadedImage != nil) {
[imageCache setObject:downloadedImage forKey:urlString];
self.image = downloadedImage;
}
});
}
}
#end

How to extract performance metrics measured by measureBlock in XCTest

I have a simple test function which taps a button and measures the performance. I'm using XCTest. After measureBlock returns, I can see a bunch of perf metrics on the console. I would like to get this data within the test program so that I can push it somewhere else programmatically. Watching the test data on the test console is proving to be slow because I have a lot of test cases.
- (void)testUseMeasureBlock {
    XCUIElement *launchTest1Button = [[XCUIApplication alloc] init].buttons[@"Launch Test 1"];

    void (^blockToMeasure)(void) = ^void(void) {
        [launchTest1Button tap];
    };

    // Run once to warm up any potential caching properties
    @autoreleasepool {
        blockToMeasure();
    }

    // Now measure the block
    [self measureBlock:blockToMeasure];

    /// Collect the measured metrics and send somewhere.
}
When we run a test it prints:
measured [Time, seconds] average: 0.594, relative standard deviation: 0.517%, values: [0.602709, 0.593631, 0.593004, 0.592350, 0.596199, 0.593807, 0.591444, 0.593460, 0.592648, 0.592769],
If I could get the average time, that'd be sufficient for now.
Since there is no API to get this data, you can pipe the stderr stream and parse the test logs to extract the info you need, e.g. the average time. For instance, you can use the following approach:
@interface MeasureParser : NSObject
@property (nonatomic) NSPipe* pipe;
@property (nonatomic) NSRegularExpression* regex;
@property (nonatomic) NSMutableDictionary* results;
@end

@implementation MeasureParser

- (instancetype)init {
    self = [super init];
    if (self) {
        self.pipe = NSPipe.pipe;
        self.results = [NSMutableDictionary new];
        NSString *pattern = @"[^']+'\\S+\\s([^\\]]+)\\]'\\smeasured\\s\\[Time,\\sseconds\\]\\saverage:\\s([^,]+)";
        NSError* error = nil;
        self.regex = [NSRegularExpression regularExpressionWithPattern:pattern options:NSRegularExpressionCaseInsensitive error:&error];
        if (error) {
            return nil;
        }
    }
    return self;
}

- (void)capture:(void (^)(void))block {
    // Save original output
    int original = dup(STDERR_FILENO);
    setvbuf(stderr, nil, _IONBF, 0);
    dup2(self.pipe.fileHandleForWriting.fileDescriptor, STDERR_FILENO);

    __weak typeof(self) wself = self;
    self.pipe.fileHandleForReading.readabilityHandler = ^(NSFileHandle *handle) {
        NSString *str = [[NSString alloc] initWithData:handle.availableData encoding:NSUTF8StringEncoding];
        NSTextCheckingResult *firstMatch = [wself.regex firstMatchInString:str options:NSMatchingReportCompletion range:NSMakeRange(0, str.length)];
        if (firstMatch) {
            NSString *name = [str substringWithRange:[firstMatch rangeAtIndex:1]];
            NSString *average = [str substringWithRange:[firstMatch rangeAtIndex:2]];
            wself.results[name] = average;
        }
        // Print to stdout because stderr is piped
        printf("%s", [str cStringUsingEncoding:NSUTF8StringEncoding]);
    };

    block();

    // Revert
    fflush(stderr);
    dup2(original, STDERR_FILENO);
    close(original);
}

@end
How to use:
- (void)testPerformanceExample {
    MeasureParser *measureParser = [MeasureParser new];
    [measureParser capture:^{
        [self measureBlock:^{
            // Put the code you want to measure the time of here.
            sleep(1);
        }];
    }];
    NSLog(@"%@", measureParser.results);
}

// Outputs
{
    testPerformanceExample = "1.001";
}
Swift 5 version
final class MeasureParser {
    let pipe: Pipe = Pipe()
    let regex: NSRegularExpression?
    let results: NSMutableDictionary = NSMutableDictionary()

    init() {
        self.regex = try? NSRegularExpression(
            pattern: "\\[(Clock Monotonic Time|CPU Time|Memory Peak Physical|Memory Physical|CPU Instructions Retired|Disk Logical Writes|CPU Cycles), (s|kB|kI|kC)\\] average: ([0-9\\.]*),",
            options: .caseInsensitive)
    }

    func capture(completion: @escaping () -> Void) {
        let original = dup(STDERR_FILENO)
        setvbuf(stderr, nil, _IONBF, 0)
        dup2(self.pipe.fileHandleForWriting.fileDescriptor, STDERR_FILENO)

        self.pipe.fileHandleForReading.readabilityHandler = { [weak self] handle in
            guard let self = self else { return }
            let data = handle.availableData
            let str = String(data: data, encoding: .utf8) ?? "<Non-ascii data of size \(data.count)>\n"
            self.fetchAndSaveMetrics(str)
            // Print to stdout because stderr is piped
            print(str, terminator: "")
        }

        completion()

        fflush(stderr)
        dup2(original, STDERR_FILENO)
        close(original)
    }

    private func fetchAndSaveMetrics(_ str: String) {
        guard let mRegex = self.regex else { return }
        let matches = mRegex.matches(in: str, options: .reportCompletion, range: NSRange(location: 0, length: str.utf16.count))
        matches.forEach {
            let nameIndex = Range($0.range(at: 1), in: str)
            let averageIndex = Range($0.range(at: 3), in: str)
            if let nameIndex = nameIndex, let averageIndex = averageIndex {
                let name = String(str[nameIndex])
                let average = String(str[averageIndex])
                self.results[name] = average
            }
        }
    }
}
How to use it:
import XCTest

final class MyUiTests: XCTestCase {
    var app: XCUIApplication!
    let measureParser = MeasureParser()

    // MARK: - XCTestCase

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        app = XCUIApplication()
        app.launch()
    }

    override func tearDown() {
        //FIXME: Just for debugging
        print(self.measureParser.results)
        print(self.measureParser.results["CPU Cycles"])
        print(self.measureParser.results["CPU Instructions Retired"])
        print(self.measureParser.results["CPU Time"])
        print(self.measureParser.results["Clock Monotonic Time"])
        print(self.measureParser.results["Disk Logical Writes"])
        print(self.measureParser.results["Memory Peak Physical"])
        print(self.measureParser.results["Memory Physical"])
        super.tearDown()
    }

    // MARK: - Tests

    func testListing() {
        self.measureParser.capture { [weak self] in
            guard let self = self else { return }
            self.measureListingScroll()
        }
    }

    // MARK: XCTest measures

    private func measureListingScroll() {
        measure(metrics: [XCTCPUMetric(), XCTClockMetric(), XCTMemoryMetric(), XCTStorageMetric()]) {
            self.app.swipeUp()
            self.app.swipeUp()
            self.app.swipeUp()
        }
    }
}
There's a private instance variable, __perfMetricsForID, of XCTestCase that stores the result.
You can access it by calling:
NSDictionary* perfMetrics = [testCase valueForKey:@"__perfMetricsForID"];
The result contains the same metrics that are printed to the console.
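For example, you might dump it in tearDown (a sketch; __perfMetricsForID is a private, undocumented ivar, so both the key and the dictionary's structure may change between Xcode releases):
override func tearDown() {
    // Read the private metrics dictionary via key-value coding and log it
    // (or forward it to your own reporting code).
    if let perfMetrics = value(forKey: "__perfMetricsForID") as? NSDictionary {
        print(perfMetrics)
    }
    super.tearDown()
}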

SWIFT Closure syntax - convert from Objective C

I have the following function written in Objective C using blocks and I am trying to convert it to swift, but I am banging my head against the wall and can't get it sorted.
Here is the code in Objective C
typedef void (^ResponseBlock) (BOOL succeeded, NSArray* data);

- (void)findAllMediaFromDate:(NSDate*)createdAtDate block:(ResponseBlock)block
{
    NSMutableArray *results = [NSMutableArray array];
    PFQuery *query = [PFQuery queryWithClassName:PARSE_MODEL_ACTIVITIES];
    [query orderByDescending:@"createdAt"];
    [query findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
        if (!error) {
            for (ActivityObject *object in objects) {
                if ([object.media.type isEqualToString: @"video"] || [object.media.type isEqualToString: @"photo"]) {
                    [results addObject:object];
                }
            }
            block(YES, results);
        }
        else {
        }
    }];
}
Now here is what I have in Swift. It's a different function body, but the syntax I am trying is the same...
func userQuery() { //This needs to return an array as ABOVE
    var results = [UserModel]()
    println("NM: userQuery")
    var query = UserModel.query()
    query.whereKey("objectId", equalTo: "7N0IWUffOZ")
    query.findObjectsInBackgroundWithBlock { (objects: [AnyObject]!, error: NSError!) -> Void in
        if (objects != nil) {
            NSLog("yea")
            for object in objects {
                results.append(object as UserModel)
                //Need to return the array to the userQuery function
            }
        } else {
            NSLog("%@", error)
        }
    }
}
You can add the closure parameter like so:
func userQuery(completionHandler: (succeeded: Bool, array: [UserModel]?) -> ()) {
    // do your other stuff here
    if objects != nil {
        // build results
        completionHandler(succeeded: true, array: results)
    } else {
        // return failure
        completionHandler(succeeded: false, array: nil)
    }
}
Clearly, change your array parameter to be whatever you want (rename it, change the type, whatever), but the basic idea is to have an optional array return type.
And you can call it using the trailing closure syntax, like so:
userQuery() { success, results in
    if success {
        // use results here
    } else {
        // handle error here
    }
}
By the way, if you like the Objective-C typedef pattern, the equivalent in Swift is typealias:
typealias ResponseClosure = (succeeded: Bool, array: [UserModel]?) -> ()
func userQuery(completionHandler: ResponseClosure) {
// the same as above
}
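Putting those pieces together with the Parse query from your question, a rough sketch might look like this (untested, Swift 1.x-era syntax; UserModel and the findObjectsInBackgroundWithBlock signature are taken from your snippet):
func userQuery(completionHandler: ResponseClosure) {
    println("NM: userQuery")
    var query = UserModel.query()
    query.whereKey("objectId", equalTo: "7N0IWUffOZ")
    query.findObjectsInBackgroundWithBlock { (objects: [AnyObject]!, error: NSError!) -> Void in
        if objects != nil {
            var results = [UserModel]()
            for object in objects {
                results.append(object as UserModel)
            }
            // hand the array back to the caller instead of trying to return it
            completionHandler(succeeded: true, array: results)
        } else {
            NSLog("%@", error)
            completionHandler(succeeded: false, array: nil)
        }
    }
}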

How to convert code AVFoundation objective c to Swift?

I am using AVFoundation in Swift to take pictures, but I can't convert some lines of code from Objective-C to Swift. My func code is:
- (void) capImage { //method to capture image from AVCaptureSession video feed
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        if (imageSampleBuffer != NULL) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
            [self processImage:[UIImage imageWithData:imageData]];
        }
    }];
}
This line gives me the error "AnyObject[] does not conform to protocol Sequence":
for (AVCaptureInputPort *port in [connection inputPorts]) {
In Swift:
for port: AnyObject in connection.inputPorts {
And I don't know how to convert this line:
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
Can you help me convert it to Swift?
Thanks!
for (AVCaptureInputPort *port in [connection inputPorts]) {
Arrays of AnyObject should be cast to arrays of your actual type before iterating, like this:
for port in connection.inputPorts as [AVCaptureInputPort] { }
In terms of blocks to closures, you just have to get the syntax correct.
stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
    (imageSampleBuffer, error) in // This line names the closure's inputs
    //...
}
Note that this also uses Trailing Closure Syntax. Do read up on the docs more!
EDIT: In terms of initializers, they now look like this:
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
self.processImage(UIImage(data:imageData))
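Putting those pieces together, a rough Swift sketch of the whole capImage method might look like this (untested, Swift 1.x-era syntax; stillImageOutput and processImage are assumed to exist as in your Objective-C code):
func capImage() { // method to capture image from AVCaptureSession video feed
    var videoConnection: AVCaptureConnection? = nil
    for connection in stillImageOutput.connections as [AVCaptureConnection] {
        for port in connection.inputPorts as [AVCaptureInputPort] {
            if port.mediaType == AVMediaTypeVideo {
                videoConnection = connection
                break
            }
        }
        if videoConnection != nil {
            break
        }
    }

    println("about to request a capture from: \(stillImageOutput)")
    stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) { (imageSampleBuffer, error) in
        if imageSampleBuffer != nil {
            let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
            self.processImage(UIImage(data: imageData))
        }
    }
}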
Try this:
var videoConnection: AVCaptureConnection?
if let videoConnection = self.stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
    self.stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (buffer: CMSampleBuffer!, error: NSError!) -> Void in
        if let exifAttachments = CMGetAttachment(buffer, kCGImagePropertyExifDictionary, nil) {
            let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer)
            self.previewImage.image = UIImage(data: imageData)
            UIImageWriteToSavedPhotosAlbum(self.previewImage.image, nil, nil, nil)
        }
    })
}
This should answer the problem with the ports:
if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
    // take a photo here
}

How to cast object of class id to CGPoint for NSMutableArray?

If I have an object myObject of type id, how would I "cast" it to a CGPoint (given that I have performed introspection and know myObject to be a CGPoint)? This is despite the fact that CGPoint is not a real Obj-C class.
Simply doing (CGPoint)myObject returns the following error:
Used type 'CGPoint' (aka 'struct CGPoint') where arithmetic or pointer type is required
I want to do this so that I can check if the object being passed to an NSMutableArray is a CGPoint, and if it is, to wrap the CGPoint in an NSValue automatically; e.g.:
- (NSMutableArray *)addObjectToNewMutableArray:(id)object
{
    NSMutableArray *myArray = [[NSMutableArray alloc] init];

    id objectToAdd = object;
    if ([object isKindOfClass:[CGPoint class]]) // pseudo-code, doesn't work
    {
        objectToAdd = [NSValue valueWithCGPoint:object];
    }
    [myArray addObject:objectToAdd];
    return myArray;
}
ADDITIONAL CODE
Here are the functions I use to perform "introspection":
+ (BOOL)validateObject:(id)object
{
    if (object)
    {
        if ([object isKindOfClass:[NSValue class]])
        {
            NSValue *value = (NSValue *)object;
            if (CGPointEqualToPoint([value CGPointValue], [value CGPointValue]))
            {
                return YES;
            }
            else
            {
                NSLog(@"[TEST] Invalid object: object is not CGPoint");
                return NO;
            }
        }
        else
        {
            NSLog(@"[TEST] Invalid object: class not allowed (%@)", [object class]);
            return NO;
        }
    }
    return YES;
}

+ (BOOL)validateArray:(NSArray *)array
{
    for (id object in array)
    {
        if (object)
        {
            if ([object isKindOfClass:[NSValue class]])
            {
                NSValue *value = (NSValue *)object;
                if (!(CGPointEqualToPoint([value CGPointValue], [value CGPointValue])))
                {
                    NSLog(@"[TEST] Invalid object: object is not CGPoint");
                    return NO;
                }
            }
            else
            {
                NSLog(@"[TEST] Invalid object: class not allowed (%@)", [object class]);
                return NO;
            }
        }
    }
    return YES;
}

+ (NSValue *)convertObject:(CGPoint)object
{
    return [NSValue valueWithCGPoint:object];
}
A CGPoint is not an Objective-C object. You cannot pass one to your addObjectToNewMutableArray: method. The compiler will not let you.
You need to wrap the CGPoint in an NSValue and pass that wrapper to your addObjectToNewMutableArray: method.
If you have an NSValue and you want to test whether it contains a CGPoint, you can ask it like this:
if (strcmp([value objCType], @encode(CGPoint)) == 0) {
    CGPoint point = [value CGPointValue];
    ...
}
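For example, the caller-side flow might look like this (a minimal sketch; myArray stands for whatever array the wrapper ends up in):
CGPoint point = CGPointMake(1.0, 2.0);
// Box the struct before handing it to anything that expects an object:
NSValue *wrapped = [NSValue valueWithCGPoint:point];
[self addObjectToNewMutableArray:wrapped];

// Later, when pulling it back out of the array:
NSValue *value = myArray.firstObject;
if (strcmp([value objCType], @encode(CGPoint)) == 0) {
    CGPoint unwrapped = [value CGPointValue];
    // use unwrapped here
}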
A point isn't an object and therefore can't be cast to one, or vice versa.
Casting doesn't transform data; it only changes how the data is interpreted!
id is basically short for NSObject* btw.