MLKit object detector is crashing in MLKObjectDetectorOptions - objective-c

I am trying to fetch object frames from a given image using MLKit, but my code crashes at this line:
MLKObjectDetectorOptions *options = [[MLKObjectDetectorOptions alloc] init];
Here is the full implementation:
#import <Foundation/Foundation.h>
#import <MLKitObjectDetection/MLKitObjectDetection.h>
#import <MLKitObjectDetectionCommon/MLKObjectDetector.h>
#import <MLKitObjectDetection/MLKObjectDetectorOptions.h>
#import <MLKitObjectDetectionCommon/MLKObject.h>
#import <MLKitVision/MLKVisionImage.h>
#import "MLKitObjectDetection.h"
@implementation MLObjectDetection

- (NSMutableArray *)detectFrame:(UIImage *)image {
    NSMutableArray *frames = [NSMutableArray new];
    // Multiple object detection in static images
    MLKObjectDetectorOptions *options = [[MLKObjectDetectorOptions alloc] init];
    options.detectorMode = MLKObjectDetectorModeSingleImage;
    options.shouldEnableMultipleObjects = YES;
    MLKObjectDetector *objectDetector = [MLKObjectDetector objectDetectorWithOptions:options];
    MLKVisionImage *visionImage = [[MLKVisionImage alloc] initWithImage:image];
    visionImage.orientation = image.imageOrientation;
    NSError *error;
    NSArray *objects = [objectDetector resultsInImage:visionImage error:&error];
    if (error != nil) {
        // Detection failed; return the (empty) array.
        return frames;
    }
    if (objects.count == 0) {
        // No objects detected.
    }
    for (MLKObject *object in objects) {
        [frames addObject:[NSValue valueWithCGRect:object.frame]];
    }
    // TODO: release memory
    return frames;
}

@end

When your code crashes, did you try running it inside Xcode and looking at the console in the debug area to check what error message was printed there? Usually that will give you some clue as to what led to the crash.
Based on your code, are you calling the detectFrame: method from the main UI thread? The synchronous MLKObjectDetector#resultsInImage:error: should never be called from the main UI thread. This is documented in its API reference. You can check out ML Kit's quickstart sample app here. It shows how to call both the synchronous MLKObjectDetector#resultsInImage:error: API and the asynchronous MLKObjectDetector#processImage:completion: API.
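For example, one way to keep the synchronous call off the main UI thread is to dispatch the detection work to a background queue and hop back to the main queue with the results. This is only a minimal sketch against the question's MLObjectDetection class; the surrounding variable names (`detection`, `image`) are assumptions:
MLObjectDetection *detection = [[MLObjectDetection alloc] init];
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    // Runs off the main thread, so the synchronous detector call is allowed here.
    NSMutableArray *frames = [detection detectFrame:image]; // `image` assumed to be in scope
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main thread; safe to update the UI with the detected frames.
        NSLog(@"Detected %lu frames", (unsigned long)frames.count);
    });
});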

Related

Worklight 6.0 adapter: native iOS to hybrid application

I am able to invoke a Worklight 6.0 adapter from iOS native code (using Objective-C), but I cannot read the adapter's JSON response from my hybrid application using the Cordova plugin.
// invoke the adapter
MyConnectListener *connectListener = [[MyConnectListener alloc] initWithController:self];
[[WLClient sharedInstance] wlConnectWithDelegate:connectListener];
// calling the adapter using objective-c
WLProcedureInvocationData *myInvocationData = [[WLProcedureInvocationData alloc] initWithAdapterName:@"HTTP_WS_ADPTR" procedureName:@"getBalance"];
MyInvokeListener *invokeListener = [[MyInvokeListener alloc] initWithController: self];
[[WLClient sharedInstance] invokeProcedure:myInvocationData withDelegate:invokeListener];
CDVPluginResult *pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsString:responseString];
[self.commandDelegate sendPluginResult:pluginResult callbackId:command.callbackId];
// hybrid application calls the native code
cordova.exec(sayHelloSuccess, sayHelloFailure, "HelloWorldPlugin", "sayHello", [name]);
The cordova.exec success and failure callbacks above do not return a value, and I am not able to read the value from the CDVPluginResult. Can anybody please advise me how to read the adapter response in my hybrid application from the iOS native code?
Several things to note:
You are using Worklight 6.0.0.x. In Worklight 6.0.0.x there is no proper session sharing between web and native views. Meaning, if you call, for example, the WL.Client.connect() method in the web view and then do a connect() and adapter invocation in the native view, these calls will not share the same session, which can lead to race conditions, an inability to share state between the views, and other unexpected behavior. Not recommended.
If this is the approach you're looking to implement in your Hybrid application, it is therefore highly recommended that you upgrade to either MobileFirst (previously known as "Worklight") v6.3 or v7.0, where session sharing between web and native views is available out of the box.
Although you might just want to opt to call the adapter from the JS code...
To get this to work as-is in your supplied project, you can change the implementation based on the below.
Note that the implementation below is based on MFP 7.0, so the adapter invocation code will not work in your 6.0.0.x codebase as-is. You will need to adapt it to your own code in v6.0.0.x:
sayHello.h
#import <Foundation/Foundation.h>
#import <Cordova/CDV.h>
#import "WLClient.h"
#import "WLDelegate.h"
@interface SayHelloPlugin : CDVPlugin
- (void)sayHello:(CDVInvokedUrlCommand*)command;
- (void)callResult:(NSString*)response;
@end
sayHello.m
#import "SayHelloPlugin.h"
#import "MyConnectListener.h"
CDVInvokedUrlCommand *tempCommand;
@implementation SayHelloPlugin
- (void)sayHello:(CDVInvokedUrlCommand*)command {
MyConnectListener *connectListener = [[MyConnectListener alloc] init:self];
[[WLClient sharedInstance] wlConnectWithDelegate:connectListener];
tempCommand = command;
}
-(void)callResult:(NSString*)response{
CDVPluginResult *pluginResult =
[CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsString:response];
[self.commandDelegate sendPluginResult:pluginResult callbackId:tempCommand.callbackId];
}
@end
MyConnectListener.h
#import <Foundation/Foundation.h>
#import "WLClient.h"
#import "WLDelegate.h"
#import "SayHelloPlugin.h"
@interface MyConnectListener : NSObject <WLDelegate> {
@private
SayHelloPlugin *sh;
}
- (id)init: (SayHelloPlugin *)sayHello;
@end
MyConnectListener.m
The responseText line is commented out because the data retrieved from the adapter was too large, I suspect, so it's best to return only what you really need and not all of it.
#import "MyConnectListener.h"
#import "WLResourceRequest.h"
NSString *resultText;
NSString *request;
@implementation MyConnectListener
- (id)init: (SayHelloPlugin *) sayHello{
if ( self = [super init] )
{
sh = sayHello;
}
return self;
}
-(void)onSuccess:(WLResponse *)response{
NSURL* url = [NSURL URLWithString:@"/adapters/testAdapter/getStories"];
WLResourceRequest* request = [WLResourceRequest requestWithURL:url method:WLHttpMethodGet];
[request setQueryParameterValue:@"['technology']" forName:@"params"];
[request sendWithCompletionHandler:^(WLResponse *response, NSError *error) {
if(error != nil){
resultText = @"Invocation failure: ";
resultText = [resultText stringByAppendingString: error.description];
[sh callResult:resultText];
}
else{
resultText = @"Invocation success. ";
//resultText = [resultText stringByAppendingString:response.responseText];
[sh callResult:resultText];
}
}];
}
-(void)onFailure:(WLFailResponse *)response{
resultText = @"Connection failure: ";
resultText = [resultText stringByAppendingString:[response errorMsg]];
NSLog(@"***** failure response: %@", resultText);
[sh callResult:resultText];
}
@end

SpriteKit .sks files and subclassing

Apple demoed this code in their WWDC 2014 Session 608 video on best practices for SpriteKit.
AppDelegate.m
+ (instancetype)unarchiveFromFile:(NSString *)file {
/* Retrieve scene file path from the application bundle */
NSString *nodePath = [[NSBundle mainBundle] pathForResource:file ofType:@"sks"];
/* Unarchive the file to an SKScene object */
NSData *data = [NSData dataWithContentsOfFile:nodePath
options:NSDataReadingMappedIfSafe
error:nil];
NSKeyedUnarchiver *arch = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
[arch setClass:self forClassName:#"SKScene"];
SKScene *scene = [arch decodeObjectForKey:NSKeyedArchiveRootObjectKey];
[arch finishDecoding];
return scene;
}
I understand the gist of what it's doing, but what I'm confused about is how to utilize this code for any other .sks file. I tried calling the unarchiveFromFile method from my GameScene.m class, but to no avail. I read the post here on this topic, but it did not clarify things.
EDIT
As per what was suggested by Okapi, I tried the following in a new OS X SpriteKit project, in the GameScene.m class:
#import "GameScene.h"
@implementation GameScene
-(void)didMoveToView:(SKView *)view {
SKNode *nodeInScene2 = [self childNodeWithName:@"object1"];
for (SKNode *blah in [SKScene unarchiveFromFile:@"Scene2"].children) {
[nodeInScene2 addChild:blah];
}
}
-(void)update:(CFTimeInterval)currentTime {
/* Called before each frame is rendered */
}
@end
I have 2 .sks files. The first is called GameScene.sks and that has a sprite in there called "object1". I would like to add children stored in "Scene2.sks". The loop in the didMoveToView method gives me an error. What am I doing wrong? This is what Apple did in their WWDC 608 video, but perhaps I'm missing something since I can't find their project online.
+unarchiveFromFile: is defined as a category on SKScene in GameViewController.m. You would need to copy that code if you want to use it somewhere else.
@implementation SKScene (Unarchive)
+ (instancetype)unarchiveFromFile:(NSString *)file {
/* Retrieve scene file path from the application bundle */
NSString *nodePath = [[NSBundle mainBundle] pathForResource:file ofType:@"sks"];
/* Unarchive the file to an SKScene object */
NSData *data = [NSData dataWithContentsOfFile:nodePath
options:NSDataReadingMappedIfSafe
error:nil];
NSKeyedUnarchiver *arch = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
[arch setClass:self forClassName:#"SKScene"];
SKScene *scene = [arch decodeObjectForKey:NSKeyedArchiveRootObjectKey];
[arch finishDecoding];
return scene;
}
@end
This one hung me up for a while too. I ended up declaring unarchiveFromFile in every subclass that needed it, then calling it when I wanted to transition scenes. Something like this...
if (contactQuery == (playerCategory | proceedCategory)) {
SKScene *levelTwo = [LevelTwo unarchiveFromFile:@"LevelTwo"];
[self.view presentScene:levelTwo transition:[SKTransition doorsCloseHorizontalWithDuration:0.5]];
}
Full sample code can be downloaded here.
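For what it's worth, "declaring unarchiveFromFile in every subclass" could look roughly like the sketch below; the LevelTwo name just mirrors the snippet above, and the body is the same unarchiving code shown earlier (because it calls [arch setClass:self forClassName:@"SKScene"], the decoded scene comes back as an instance of whichever subclass it was invoked on):
// LevelTwo.h
#import <SpriteKit/SpriteKit.h>
@interface LevelTwo : SKScene
+ (instancetype)unarchiveFromFile:(NSString *)file;
@end
// LevelTwo.m would then contain the same +unarchiveFromFile: implementation shown above.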

Why isn't multithreading working in this implementation?

Q1: Can I call a method and have it execute on a background thread from inside another method that is currently executing on the main thread?
Q2: As an extension of the above, can I call a method and have it execute on a background thread from inside another method that is currently executing on some other background thread itself?
Q3: And one final question given the above: if I initialize an instance of some object X on some thread (main/background) and then have a method Y of that object X executing on some other background thread, can this method Y send messages and update an int property of that object X, or is such communication not possible?
The reason I'm asking this last question is that I've been going over and over it again and I can't figure out what is wrong here:
The following code returns zero acceleration and zero degrees values:
MotionHandler.m
@implementation MotionHandler
@synthesize currentAccelerationOnYaxis; // this is a double
-(void)startCompassUpdates
{
locationManager=[[CLLocationManager alloc] init];
locationManager.desiredAccuracy = kCLLocationAccuracyBest;
locationManager.delegate=self;
[locationManager startUpdatingHeading];
NSLog(@"compass updates initialized");
}
-(int) currentDegrees
{
return (int)locationManager.heading.magneticHeading;
}
-(void) startAccelerationUpdates
{
CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 0.01;
[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue]
withHandler:^(CMDeviceMotion *motion, NSError *error)
{
self.currentAccelerationOnYaxis = motion.userAcceleration.y;
}
];
}
@end
Tester.m
@implementation Tester
-(void)test
{
MotionHandler *currentMotionHandler = [[MotionHandler alloc] init];
[currentMotionHandler performSelectorInBackground:@selector(startCompassUpdates) withObject:nil];
[currentMotionHandler performSelectorInBackground:@selector(startAccelerationUpdates) withObject:nil];
while(1==1)
{
NSLog(@"current acceleration is %f", currentMotionHandler.currentAccelerationOnYaxis);
NSLog(@"current degrees are %i", [currentMotionHandler currentDegrees]);
}
}
@end
SomeViewController.m
@implementation SomeViewController
-(void) viewDidLoad
{
[myTester performSelectorInBackground:@selector(test) withObject:nil];
}
@end
However, the following code returns those values normally:
Tester.m
@interface Tester()
{
CLLocationManager *locationManager;
double accelerationOnYaxis;
// more code..
}
@end
@implementation Tester
- (id) init
{
locationManager=[[CLLocationManager alloc] init];
locationManager.desiredAccuracy = kCLLocationAccuracyBest;
locationManager.delegate=self;
[locationManager startUpdatingHeading];
// more code..
}
-(void) test
{
CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 0.01;
[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
withHandler:^(CMDeviceMotion *motion, NSError *error)
{
accelerationOnYaxis = motion.userAcceleration.y;
}
];
while(1==1)
{
NSLog(@"current acceleration is %f", accelerationOnYaxis);
NSLog(@"current degrees are %i", (int)locationManager.heading.magneticHeading);
}
}
@end
SomeViewController.m
@implementation SomeViewController
-(void) viewDidLoad
{
[myTester performSelectorInBackground:@selector(test) withObject:nil];
}
@end
What's wrong with the first version? I really want to use the first one because it seems much better design-wise. Thank you for any help!
Calling performSelectorInBackground:withObject: is the same as if you called the detachNewThreadSelector:toTarget:withObject: method of NSThread with the current object, selector, and parameter object as parameters (Threading Programming Guide). No matter where you call it, a new thread will be created to perform that selector. So to answer your first two questions: yes and yes.
For your final question, as long as this Object X is the same object in both methods, any of X's properties can be updated. But, beware that this can yield unexpected results (ie. see Concurrency Programming Guide). If multiple methods are updating X's property, values can be overwritten or disregarded. But, if you are only updating it from method Y and reading it from all other methods, such problems shouldn't occur.
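To illustrate that equivalence with the question's own MotionHandler (just a sketch, not something that needs changing):
MotionHandler *handler = [[MotionHandler alloc] init];
// This call...
[handler performSelectorInBackground:@selector(startCompassUpdates) withObject:nil];
// ...behaves the same as this one: both detach a new thread to run the selector.
[NSThread detachNewThreadSelector:@selector(startCompassUpdates)
                         toTarget:handler
                       withObject:nil];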
You should take a look at the Grand Central Dispatch documentation from Apple. It allows you to use multiple threads in a block-based structure.
Two important functions are dispatch_sync() and dispatch_async().
Some examples:
To execute a certain block of code on a background thread and wait until it is finished:
__block id someVariable = nil;
dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
// do some heavy work in the background
someVariable = [[NSObject alloc] init];
});
NSLog(@"Variable: %@", someVariable);
This function modifies the variable someVariable which you can use later on. Please note that the main thread will be paused to wait for the background thread. If that is not what you want, you can use dispatch_async() as follows:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
// do some heavy work in the background
NSObject *someVariable = [[NSObject alloc] init];
// notify main thread that the work is done
dispatch_async(dispatch_get_main_queue(), ^{
// call some function and pass someVariable to it, it will be called on the main thread
NSLog(@"Variable: %@", someVariable);
});
});

Play a paused AVAudioRecorder file

In my program I want the user to be able to:
record his voice,
pause the recording process,
listen to what he recorded
and then continue recording.
I have managed to get to the point where I can record and play the recordings with AVAudioRecorder and AVAudioPlayer. But whenever I try to record, pause recording and then play, the playing part fails with no error.
I can guess that the reason it's not playing is because the audio file hasn't been saved yet and is still in memory or something.
Is there a way I can play paused recordings?
If there is, please tell me how.
I'm using Xcode 4.3.2.
If you want to play the recording, then yes you have to stop recording before you can load the file into the AVAudioPlayer instance.
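A minimal sketch of that first case, assuming you kept a reference to the recorder (and therefore to its file URL):
// Stop recording so the file is finalized on disk...
[recorder stop];
// ...then load that same file into a player for playback.
NSError *playbackError = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:recorder.url error:&playbackError];
[player prepareToPlay];
[player play];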
If you want to be able to playback some of the recording, then add more to the recording after listening to it, or say record in the middle.. then you're in for some trouble.
You have to create a new audio file and then combine them together.
This was my solution:
// Generate a composition of the two audio assets that will be combined into
// a single track
AVMutableComposition* composition = [AVMutableComposition composition];
AVMutableCompositionTrack* audioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
// grab the two audio assets as AVURLAssets according to the file paths
AVURLAsset* masterAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.masterFile] options:nil];
AVURLAsset* activeAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.newRecording] options:nil];
NSError* error = nil;
// grab the portion of interest from the master asset
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, masterAsset.duration)
ofTrack:[[masterAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
atTime:kCMTimeZero
error:&error];
if (error)
{
// report the error
return;
}
// append the entirety of the active recording
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, activeAsset.duration)
ofTrack:[[activeAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
atTime:masterAsset.duration
error:&error];
if (error)
{
// report the error
return;
}
// now export the two files
// create the export session
// no need for a retain here, the session will be retained by the
// completion handler since it is referenced there
AVAssetExportSession* exportSession = [AVAssetExportSession
exportSessionWithAsset:composition
presetName:AVAssetExportPresetAppleM4A];
if (nil == exportSession)
{
// report the error
return;
}
NSString* combined = @"combined file path";// create a new file for the combined file
// configure export session output with all our parameters
exportSession.outputURL = [NSURL fileURLWithPath:combined]; // output path
exportSession.outputFileType = AVFileTypeAppleM4A; // output file type
[exportSession exportAsynchronouslyWithCompletionHandler:^{
// export status changed, check to see if it's done, errored, waiting, etc
switch (exportSession.status)
{
case AVAssetExportSessionStatusFailed:
break;
case AVAssetExportSessionStatusCompleted:
break;
case AVAssetExportSessionStatusWaiting:
break;
default:
break;
}
NSError* error = nil;
// your code for dealing with the now combined file
}];
I can't take full credit for this work, but it was pieced together from the input of a couple of others:
AVAudioRecorder / AVAudioPlayer - append recording to file
(I can't find the other link at the moment)
We had the same requirements for our app as the OP described, and ran into the same issues (i.e., the recording has to be stopped, instead of paused, if the user wants to listen to what she has recorded up to that point). Our app (project's Github repo) uses AVQueuePlayer for playback and a method similar to kermitology's answer to concatenate the partial recordings, with some notable differences:
implemented in Swift
concatenates multiple recordings into one
no messing with tracks
The rationale behind the last item is that simple recordings with AVAudioRecorder will have one track, and the main reason for this whole workaround is to concatenate those single tracks in the assets (see Addendum 3). So why not use AVMutableComposition's insertTimeRange method instead, that takes an AVAsset instead of an AVAssetTrack?
Relevant parts: (full code)
import UIKit
import AVFoundation
class RecordViewController: UIViewController {
/* App allows volunteers to record newspaper articles for the
blind and print-impaired, hence the name.
*/
var articleChunks = [AVURLAsset]()
func concatChunks() {
let composition = AVMutableComposition()
/* `CMTimeRange` to store total duration and know when to
insert subsequent assets.
*/
var insertAt = CMTimeRange(start: kCMTimeZero, end: kCMTimeZero)
repeat {
let asset = self.articleChunks.removeFirst()
let assetTimeRange =
CMTimeRange(start: kCMTimeZero, end: asset.duration)
do {
try composition.insertTimeRange(assetTimeRange,
of: asset,
at: insertAt.end)
} catch {
NSLog("Unable to compose asset track.")
}
let nextDuration = insertAt.duration + assetTimeRange.duration
insertAt = CMTimeRange(start: kCMTimeZero, duration: nextDuration)
} while self.articleChunks.count != 0
let exportSession =
AVAssetExportSession(
asset: composition,
presetName: AVAssetExportPresetAppleM4A)
exportSession?.outputFileType = AVFileType.m4a
exportSession?.outputURL = /* create URL for output */
// exportSession?.metadata = ...
exportSession?.exportAsynchronously {
switch exportSession?.status {
case .unknown?: break
case .waiting?: break
case .exporting?: break
case .completed?: break
case .failed?: break
case .cancelled?: break
case .none: break
}
}
/* Clean up (delete partial recordings, etc.) */
}
This diagram helped me get my head around what expects what and what inherits from where. (NSObject is the implied superclass where there is no inheritance arrow.)
Addendum 1: I had my reservations regarding the switch part instead of using KVO on AVAssetExportSessionStatus, but the docs are clear that exportAsynchronously's callback block "is invoked when writing is complete or in the event of writing failure".
Addendum 2: Just in case if someone has issues with AVQueuePlayer: 'An AVPlayerItem cannot be associated with more than one instance of AVPlayer'
Addendum 3: Unless you are recording in stereo, but mobile devices have one input as far as I know. Also, using fancy audio mixing would also require the use of AVCompositionTrack. A good SO thread: Proper AVAudioRecorder Settings for Recording Voice?
RecordAudioViewController.h
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>
@interface record_audio_testViewController : UIViewController <AVAudioRecorderDelegate> {
IBOutlet UIButton * btnStart;
IBOutlet UIButton * btnPlay;
IBOutlet UIActivityIndicatorView * actSpinner;
BOOL toggle;
//Variables setup for access in the class:
NSURL * recordedTmpFile;
AVAudioRecorder * recorder;
NSError * error;
}
@property (nonatomic,retain)IBOutlet UIActivityIndicatorView * actSpinner;
@property (nonatomic,retain)IBOutlet UIButton * btnStart;
@property (nonatomic,retain)IBOutlet UIButton * btnPlay;
- (IBAction) start_button_pressed;
- (IBAction) play_button_pressed;
@end
RecordAudioViewController.m
@implementation record_audio_testViewController
@synthesize actSpinner, btnStart, btnPlay;
- (void)viewDidLoad {
[super viewDidLoad];
//Start the toggle in true mode.
toggle = YES;
btnPlay.hidden = YES;
//Instantiate an instance of the AVAudioSession object.
AVAudioSession * audioSession = [AVAudioSession sharedInstance];
//Setup the audioSession for playback and record.
//We could just use record and then switch it to playback later, but
//since we are going to do both let's set it up once.
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error: &error];
//Activate the session
[audioSession setActive:YES error: &error];
}
- (IBAction) start_button_pressed{
if(toggle)
{
toggle = NO;
[actSpinner startAnimating];
[btnStart setTitle:@"Stop Recording" forState: UIControlStateNormal ];
btnPlay.enabled = toggle;
btnPlay.hidden = !toggle;
//Begin the recording session.
//Error handling removed. Please add to your own code.
//Setup the dictionary object with all the recording settings that this
//Recording session will use
//It's not clear to me which of these are required and which are the bare minimum.
//This is a good resource: http://www.totodotnet.net/tag/avaudiorecorder/
NSMutableDictionary* recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue :[NSNumber numberWithInt:kAudioFormatAppleIMA4] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt: 2] forKey:AVNumberOfChannelsKey];
//Now that we have our settings we are going to instantiate an instance of our recorder.
//Generate a temp file for use by the recording.
//This sample was one I found online and seems to be a good choice for making a tmp file that
//will not overwrite an existing one.
//I know this is a mess of collapsed things into 1 call. I can break it out if need be.
recordedTmpFile = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent: [NSString stringWithFormat: @"%.0f.%@", [NSDate timeIntervalSinceReferenceDate] * 1000.0, @"caf"]]];
NSLog(@"Using File called: %@",recordedTmpFile);
//Setup the recorder to use this file and record to it.
recorder = [[ AVAudioRecorder alloc] initWithURL:recordedTmpFile settings:recordSetting error:&error];
//Use the recorder to start the recording.
//I'm not sure why we set the delegate to self yet.
//Found this in another example, but I'm fuzzy on this still.
[recorder setDelegate:self];
//We call this to start the recording process and initialize
//the subsystems so that when we actually say "record" it starts right away.
[recorder prepareToRecord];
//Start the actual Recording
[recorder record];
//There is an optional method for doing the recording for a limited time see
//[recorder recordForDuration:(NSTimeInterval) 10]
}
else
{
toggle = YES;
[actSpinner stopAnimating];
[btnStart setTitle:@"Start Recording" forState:UIControlStateNormal ];
btnPlay.enabled = toggle;
btnPlay.hidden = !toggle;
NSLog(@"Using File called: %@",recordedTmpFile);
//Stop the recorder.
[recorder stop];
}
}
- (void)didReceiveMemoryWarning {
// Releases the view if it doesn't have a superview.
[super didReceiveMemoryWarning];
// Release any cached data, images, etc that aren't in use.
}
-(IBAction) play_button_pressed{
//The play button was pressed...
//Setup the AVAudioPlayer to play the file that we just recorded.
AVAudioPlayer * avPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:recordedTmpFile error:&error];
[avPlayer prepareToPlay];
[avPlayer play];
}
- (void)viewDidUnload {
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
//Clean up the temp file.
NSFileManager * fm = [NSFileManager defaultManager];
[fm removeItemAtPath:[recordedTmpFile path] error:&error];
//Release the remaining objects (do not call -dealloc directly).
[recorder release];
recorder = nil;
recordedTmpFile = nil;
}
- (void)dealloc {
[super dealloc];
}
@end
RecordAudioViewController.xib
Add two buttons: one to begin recording and another to play the recording.

Objective-C: No objects in array after adding them. Out of scope!

I have an NSMutableArray in an object.
In an object-method, I do something like this:
/* ... */
[[LRResty client] get:connectURL withBlock:^(LRRestyResponse *r) {
SBJsonParser *jsonParser = [SBJsonParser new];
NSDictionary *jsonResponse = [jsonParser objectWithString:[r asString]];
NSDictionary *permittedBases = [jsonResponse objectForKey:@"permittedBases"];
Database *database = [[Database alloc] init];
for (id key in permittedBases) {
/* ... */
[workingDatabases addObject:database];
}
}];
return workingDatabases;
At the return line, there are no objects in my array (anymore). I am aware that the 'database' objects go out of scope, but I am saving them in the array.
Am I overlooking something?
If it is of any help, here is the header file:
@class Database;
@interface CommunicationHelper : NSObject {
NSMutableArray *workingDatabases;
}
// The function where the problem appears:
- (NSMutableArray *)getDatabasesForWebsite:(Website *)websiteIn;
@property(nonatomic,copy) NSMutableArray *workingDatabases;
@end
Just allocate your workingDatabases mutable array somewhere before using it.
Once you allocate it, it will work fine.
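For example, the allocation could live in CommunicationHelper's initializer; overriding init here is just one option, not something from the question:
- (id)init {
    self = [super init];
    if (self) {
        // Without this, workingDatabases stays nil and every -addObject:
        // message is silently ignored (messages to nil are no-ops).
        workingDatabases = [[NSMutableArray alloc] init];
    }
    return self;
}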
I assume it's because [LRResty client] get: is asynchronous. The block is called when the connection is finished, i.e. after the call to return.
//Called first
[[LRResty client] get:connectURL
//Called second
return workingDatabases;
//Called later when the connection is finished
SBJsonParser *jsonParser = [SBJsonParser new];
NSDictionary *jsonResponse = [jsonParser objectWithString:[r asString]];
NSDictionary *permittedBases = [jsonResponse objectForKey:@"permittedBases"];
Database *database = [[Database alloc] init];
for (id key in permittedBases) {
/* ... */
[workingDatabases addObject:database];
}
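One way to restructure this so the caller actually receives the data is to expose a completion block instead of a return value, and hand back the array only after the response has been parsed. This is a sketch based on the question's code; the block-based method name is an assumption:
- (void)getDatabasesForWebsite:(Website *)websiteIn completion:(void (^)(NSMutableArray *databases))completion {
    NSMutableArray *databases = [NSMutableArray array];
    [[LRResty client] get:connectURL withBlock:^(LRRestyResponse *r) {
        SBJsonParser *jsonParser = [SBJsonParser new];
        NSDictionary *jsonResponse = [jsonParser objectWithString:[r asString]];
        NSDictionary *permittedBases = [jsonResponse objectForKey:@"permittedBases"];
        for (id key in permittedBases) {
            Database *database = [[Database alloc] init];
            /* configure database for this key ... */
            [databases addObject:database];
        }
        // Deliver the result only once the asynchronous response has arrived.
        if (completion) completion(databases);
    }];
}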
Edit
Ajeet has a valid point too, ensure your array is initialized.
I used the LRResty framework for accessing a RESTful web service. It was an odd thing anyway, so I switched to a much more feature-rich framework called "ASIHTTP". I would recommend it to anyone who wants to use RESTful services (and more) on iOS.