Apple demoed this code in their WWDC 2014 Session 608 video on best practices for SpriteKit.
AppDelegate.m
+ (instancetype)unarchiveFromFile:(NSString *)file {
    /* Retrieve scene file path from the application bundle */
    NSString *nodePath = [[NSBundle mainBundle] pathForResource:file ofType:@"sks"];
    /* Unarchive the file to an SKScene object */
    NSData *data = [NSData dataWithContentsOfFile:nodePath
                                          options:NSDataReadingMappedIfSafe
                                            error:nil];
    NSKeyedUnarchiver *arch = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
    [arch setClass:self forClassName:@"SKScene"];
    SKScene *scene = [arch decodeObjectForKey:NSKeyedArchiveRootObjectKey];
    [arch finishDecoding];
    return scene;
}
I understand the gist of what it's doing, but what I'm confused about is how to use this code for any other .sks file. I tried calling the unarchiveFromFile: method from my GameScene.m class, but to no avail. I read the post here on this topic, but it did not clarify things.
EDIT
As per what was suggested by Okapi, I tried the following in a new OS X SpriteKit project in the GameScene.m class:
#import "GameScene.h"
#implementation GameScene
-(void)didMoveToView:(SKView *)view {
SKNode *nodeInScene2 = [self childNodeWithName:#"object1"];
for (SKNode *blah in [SKScene unarchiveFromFile:#"Scene2"].children) {
[nodeInScene2 addChild:blah];
}
}
-(void)update:(CFTimeInterval)currentTime {
/* Called before each frame is rendered */
}
#end
I have two .sks files. The first is called GameScene.sks, and it has a sprite in it named "object1". I would like to add the children stored in "Scene2.sks". The loop in the didMoveToView method gives me an error. What am I doing wrong? This is what Apple did in their WWDC 608 video, but perhaps I'm missing something, since I can't find their project online.
+unarchiveFromFile: is defined as a category on SKScene in GameViewController.m. You would need to copy that code if you want to use it somewhere else.
@implementation SKScene (Unarchive)

+ (instancetype)unarchiveFromFile:(NSString *)file {
    /* Retrieve scene file path from the application bundle */
    NSString *nodePath = [[NSBundle mainBundle] pathForResource:file ofType:@"sks"];
    /* Unarchive the file to an SKScene object */
    NSData *data = [NSData dataWithContentsOfFile:nodePath
                                          options:NSDataReadingMappedIfSafe
                                            error:nil];
    NSKeyedUnarchiver *arch = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
    [arch setClass:self forClassName:@"SKScene"];
    SKScene *scene = [arch decodeObjectForKey:NSKeyedArchiveRootObjectKey];
    [arch finishDecoding];
    return scene;
}

@end
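If more than one class needs it, a cleaner option than pasting the method around is to move the category into its own pair of files and import it wherever required. A minimal sketch, assuming you name the files SKScene+Unarchive.h/.m (the file names are my own choice, not from the WWDC project):
// SKScene+Unarchive.h (hypothetical file name)
#import <SpriteKit/SpriteKit.h>

@interface SKScene (Unarchive)
+ (instancetype)unarchiveFromFile:(NSString *)file;
@end
Put the @implementation block above into SKScene+Unarchive.m, then #import "SKScene+Unarchive.h" in GameScene.m and the call [SKScene unarchiveFromFile:@"Scene2"] should resolve.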
This one hung me up for a while too. I ended up declaring unarchiveFromFile in every subclass that needed it, then calling it when I wanted to transition scenes. Something like this...
if (contactQuery == (playerCategory | proceedCategory)) {
    SKScene *levelTwo = [LevelTwo unarchiveFromFile:@"LevelTwo"];
    [self.view presentScene:levelTwo transition:[SKTransition doorsCloseHorizontalWithDuration:0.5]];
}
Full sample code can be downloaded here.
I am trying to fetch object frames from a given image using ML Kit, but my code crashes.
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <MLKitObjectDetection/MLKitObjectDetection.h>
#import <MLKitObjectDetectionCommon/MLKObjectDetector.h>
#import <MLKitObjectDetection/MLKObjectDetectorOptions.h>
#import <MLKitObjectDetectionCommon/MLKObject.h>
#import <MLKitVision/MLKVisionImage.h>
#import "MLKitObjectDetection.h"

@implementation MLObjectDetection

- (NSMutableArray *)detectFrame:(UIImage *)image {
    NSMutableArray *frames = [NSMutableArray new];
    // Multiple object detection in static images
    MLKObjectDetectorOptions *options = [[MLKObjectDetectorOptions alloc] init];
    options.detectorMode = MLKObjectDetectorModeSingleImage;
    options.shouldEnableMultipleObjects = YES;
    MLKObjectDetector *objectDetector = [MLKObjectDetector objectDetectorWithOptions:options];
    MLKVisionImage *visionImage = [[MLKVisionImage alloc] initWithImage:image];
    visionImage.orientation = image.imageOrientation;
    NSError *error;
    NSArray *objects = [objectDetector resultsInImage:visionImage error:&error];
    if (error != nil) {
        // Detection failed; return the empty array.
        return frames;
    }
    if (objects.count == 0) {
        // No objects detected.
    }
    for (MLKObject *object in objects) {
        [frames addObject:[NSValue valueWithCGRect:object.frame]];
    }
    //TODO release memory
    return frames;
}

@end
When your code crashes, did you try running it inside Xcode and looking at the console in the debug area to check what error message was printed there? Usually that will give you some clue as to what led to the crash.
Based on your code, are you calling the detectFrame: method from the main UI thread? The synchronous MLKObjectDetector#resultsInImage:error: should never be called from the main UI thread. This is documented in its API reference. You can check out ML Kit's quickstart sample app here. It shows how to call both the synchronous MLKObjectDetector#resultsInImage:error: API and the asynchronous MLKObjectDetector#processImage:completion: API.
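For illustration, here is a hedged sketch of both options; the objectDetector and visionImage setup are assumed to be the same as in the question, and only the call site changes:
// Option 1: keep the synchronous API, but run it off the main thread.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    NSError *error = nil;
    NSArray<MLKObject *> *objects = [objectDetector resultsInImage:visionImage error:&error];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Use the detected objects / update the UI here.
    });
});

// Option 2: use the asynchronous API mentioned above.
[objectDetector processImage:visionImage
                  completion:^(NSArray<MLKObject *> *objects, NSError *error) {
    // Use the detected objects / update the UI here.
}];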
I have a memory leak that I cannot diagnose. I have tried multiple approaches to creating a seamlessly looped video. Besides AVPlayerLooper, all of the approaches I've encountered and tried involve creating an observer to watch AVPlayerItemDidPlayToEndTimeNotification and then either seeking to the beginning of the video (in the case of AVPlayer) or inserting the video to be looped into the video queue (in the case of AVQueuePlayer). Both seem to have similar performance, but both also have a consistent memory leak related to the seekToTime: method (in the case of AVPlayer) or the insertItem: method (in the case of AVQueuePlayer). My end goal is to create a subclass of SKVideoNode that loops by default. Below is my code for the subclass:
#import "SDLoopingVideoNode.h"
#import <AVFoundation/AVFoundation.h>
#interface SDLoopingVideoNode()
#property AVQueuePlayer *avQueuePlayer;
#property AVPlayerLooper *playerLooper;
#end
#implementation SDLoopingVideoNode
-(instancetype)initWithPathToResource:(NSString *)path withFiletype:(NSString *)filetype
{
if(self == [super init])
{
NSString *resourcePath = [[NSBundle mainBundle] pathForResource:path ofType:filetype];
NSURL *videoURL = [NSURL fileURLWithPath:resourcePath];
AVAsset *videoAsset = [AVAsset assetWithURL:videoURL];
AVPlayerItem * videoItem = [AVPlayerItem playerItemWithAsset:videoAsset];
self.avQueuePlayer = [[AVQueuePlayer alloc] initWithItems:#[videoItem]];
NSNotificationCenter *noteCenter = [NSNotificationCenter defaultCenter];
[noteCenter addObserverForName:AVPlayerItemDidPlayToEndTimeNotification
object:nil
queue:nil
usingBlock:^(NSNotification *note) {
AVPlayerItem *video = [[AVPlayerItem alloc] initWithURL:videoURL];
[self.avQueuePlayer insertItem:video afterItem:nil];
NSLog(#"Video changed");
}];
self = (SDLoopingVideoNode*)[[SKVideoNode alloc] initWithAVPlayer: self.avQueuePlayer];
return self;
}
return nil;
}
#end
And here is how the subclass is initialized in didMoveToView:
SDLoopingVideoNode *videoNode = [[SDLoopingVideoNode alloc] initWithPathToResource:@"147406" withFiletype:@"mp4"];
[videoNode setSize:CGSizeMake(self.size.width, self.size.height)];
[videoNode setAnchorPoint:CGPointMake(0.5, 0.5)];
[videoNode setPosition:CGPointMake(0, 0)];
[self addChild:videoNode];
[videoNode play];
The short answer is that you will not be able to get that working with AVPlayer. Believe me, I have tried. Instead, it is possible to do seamless looping by using the H264 hardware to decode and then re-encode each video frame as a keyframe, github link here. I have also built a seamless looping layer that supports a full alpha channel. Performance even for full-screen 1x1 video on an iPad or iPad Pro is great. Also, there are no memory leaks with this code.
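As an aside, the question's SDLoopingVideoNode already declares an AVPlayerLooper property but never creates one. On iOS 10+ / macOS 10.12+, AVPlayerLooper is the API Apple provides for gapless looping with AVQueuePlayer, and it avoids the manual end-of-item observer entirely. A rough sketch of how the init might use it (untested against the original project):
// Sketch: let AVPlayerLooper handle requeueing instead of observing
// AVPlayerItemDidPlayToEndTimeNotification ourselves.
AVPlayerItem *videoItem = [AVPlayerItem playerItemWithAsset:videoAsset];
self.avQueuePlayer = [AVQueuePlayer queuePlayerWithItems:@[videoItem]];
self.playerLooper = [AVPlayerLooper playerLooperWithPlayer:self.avQueuePlayer
                                              templateItem:videoItem];
// The looper must stay referenced (hence the property), or looping stops.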
I am very new to Objective-C and OpenEars, so please forgive me if I have some messy code and if I am lost in a very simple problem.
Anyhow, I have two controllers in this application. The first being the default ViewController and the second one being a new one that I made called ReplyManagerController.
The code in the ViewController basically uses the one from the tutorial with some (maybe more than some) changes.
EDIT:
The app is supposed to be a basic app where a user says something and the app replies.
But the original problem was that I could not get the string to display or TTS to work when my ViewController got its string from another class/controller.
The answer below mentions that it was probably because my other class was calling my ViewController without self.fliteController initialized.
How would I initialize the ViewController with self.fliteController initialized?
ViewController.m
- (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
    NSString *strResult = hypothesis; // speech-to-text result as a string
    ReplyManager *replyObject = [[ReplyManager alloc] init];
    [replyObject speechResult:strResult viewController:self];
}

- (void)getReply:(NSString *)replyStr {
    [self.fliteController say:replyStr withVoice:self.slt];
    [self updateText:replyStr];
}

- (IBAction)updateText:(NSString *)replyStr {
    labelOne.text = replyStr;
    labelOne.adjustsFontSizeToFitWidth = YES;
    labelOne.minimumFontSize = 0;
}
Any help will be great! Thanks!
ReplyManager.m
- (void)speechResult:(NSString *)strResult {
    NSString *replystr;
    NSString *greetings = @"Hi!";
    NSString *sorry = @"Sorry I didn't catch that?";
    ViewController *getReply = [[ViewController alloc] init];
    if ([strResult isEqualToString:@"HELLO"])
    {
        replystr = greetings;
        [getReply getReply:replystr];
    }
    else
    {
        replystr = sorry;
        [getReply getReply:replystr];
    }
}
EDIT 2:
viewDidLoad Method
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    self.fliteController = [[OEFliteController alloc] init];
    self.slt = [[Slt alloc] init];
    self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
    [self.openEarsEventsObserver setDelegate:self];
}
ViewController* getReply = [[ViewController alloc] init];
Here you init a new ViewController, which most likely does not have self.fliteController defined. You need to reuse the previous controller, for example like this:
[replyObject speechResult:(NSString*)strResult viewController:self];
That way you can use the already-initialized view controller later. Overall, it is better to initialize objects like the view controller or reply controller beforehand, not inside callback methods.
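Concretely, ReplyManager would then take the existing controller as a parameter rather than alloc/init-ing a fresh one. A minimal sketch matching the call above (the exact reply logic is copied from the question):
// ReplyManager.m -- reuse the caller's already-loaded ViewController.
- (void)speechResult:(NSString *)strResult viewController:(ViewController *)viewController {
    NSString *replystr = [strResult isEqualToString:@"HELLO"] ? @"Hi!"
                                                              : @"Sorry I didn't catch that?";
    // fliteController and slt were already set up in the controller's viewDidLoad.
    [viewController getReply:replystr];
}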
It sounds like a timing issue where you're trying to use fliteController before it's been initialized.
In your ViewController class, where do you assign a value to the fliteController property? In an initializer? -(void)viewDidLoad?
In ReplyManagerController add:
ViewController *getReply = [[ViewController alloc] init];
// Add these lines
NSLog(@"%@", getReply.fliteController); // Is this nil?
[getReply.view layoutIfNeeded];
NSLog(@"%@", getReply.fliteController); // Is it still nil?
Does the above fix the problem? If so, you're probably initializing fliteController in -(void)viewDidLoad. What's the result of the two NSLog statements?
In my program I want the user to be able to:
record his voice,
pause the recording process,
listen to what he recorded
and then continue recording.
I have managed to get to the point where I can record and play the recordings with AVAudioRecorder and AVAudioPlayer. But whenever I try to record, pause recording and then play, the playing part fails with no error.
I can guess that the reason it's not playing is because the audio file hasn't been saved yet and is still in memory or something.
Is there a way I can play paused recordings?
If there is, please tell me how.
I'm using Xcode 4.3.2.
If you want to play the recording, then yes you have to stop recording before you can load the file into the AVAudioPlayer instance.
If you want to be able to playback some of the recording, then add more to the recording after listening to it, or say record in the middle.. then you're in for some trouble.
You have to create a new audio file and then combine them together.
This was my solution:
// Generate a composition of the two audio assets that will be combined into
// a single track
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *audioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                 preferredTrackID:kCMPersistentTrackID_Invalid];

// grab the two audio assets as AVURLAssets according to the file paths
AVURLAsset *masterAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.masterFile] options:nil];
AVURLAsset *activeAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.newRecording] options:nil];

NSError *error = nil;

// grab the portion of interest from the master asset
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, masterAsset.duration)
                    ofTrack:[[masterAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                     atTime:kCMTimeZero
                      error:&error];
if (error)
{
    // report the error
    return;
}

// append the entirety of the active recording
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, activeAsset.duration)
                    ofTrack:[[activeAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                     atTime:masterAsset.duration
                      error:&error];
if (error)
{
    // report the error
    return;
}

// now export the two files
// create the export session
// no need for a retain here, the session will be retained by the
// completion handler since it is referenced there
AVAssetExportSession *exportSession = [AVAssetExportSession
                                       exportSessionWithAsset:composition
                                       presetName:AVAssetExportPresetAppleM4A];
if (nil == exportSession)
{
    // report the error
    return;
}

NSString *combined = @"combined file path"; // create a new file for the combined file

// configure export session output with all our parameters
exportSession.outputURL = [NSURL fileURLWithPath:combined]; // output path
exportSession.outputFileType = AVFileTypeAppleM4A;          // output file type

[exportSession exportAsynchronouslyWithCompletionHandler:^{
    // export status changed, check to see if it's done, errored, waiting, etc
    switch (exportSession.status)
    {
        case AVAssetExportSessionStatusFailed:
            break;
        case AVAssetExportSessionStatusCompleted:
            break;
        case AVAssetExportSessionStatusWaiting:
            break;
        default:
            break;
    }
    NSError *error = nil;
    // your code for dealing with the now combined file
}];
I can't take full credit for this work, but it was pieced together from the input of a couple of others:
AVAudioRecorder / AVAudioPlayer - append recording to file
(I can't find the other link at the moment)
We had the same requirements for our app as the OP described, and ran into the same issues (i.e., the recording has to be stopped, instead of paused, if the user wants to listen to what she has recorded up to that point). Our app (project's Github repo) uses AVQueuePlayer for playback and a method similar to kermitology's answer to concatenate the partial recordings, with some notable differences:
implemented in Swift
concatenates multiple recordings into one
no messing with tracks
The rationale behind the last item is that simple recordings with AVAudioRecorder will have one track, and the main reason for this whole workaround is to concatenate those single tracks in the assets (see Addendum 3). So why not use AVMutableComposition's insertTimeRange method instead, that takes an AVAsset instead of an AVAssetTrack?
Relevant parts: (full code)
import UIKit
import AVFoundation

class RecordViewController: UIViewController {

    /* App allows volunteers to record newspaper articles for the
       blind and print-impaired, hence the name.
    */
    var articleChunks = [AVURLAsset]()

    func concatChunks() {
        let composition = AVMutableComposition()

        /* `CMTimeRange` to store total duration and know when to
           insert subsequent assets.
        */
        var insertAt = CMTimeRange(start: kCMTimeZero, end: kCMTimeZero)

        repeat {
            let asset = self.articleChunks.removeFirst()

            let assetTimeRange =
                CMTimeRange(start: kCMTimeZero, end: asset.duration)

            do {
                try composition.insertTimeRange(assetTimeRange,
                                                of: asset,
                                                at: insertAt.end)
            } catch {
                NSLog("Unable to compose asset track.")
            }

            let nextDuration = insertAt.duration + assetTimeRange.duration
            insertAt = CMTimeRange(start: kCMTimeZero, duration: nextDuration)
        } while self.articleChunks.count != 0

        let exportSession =
            AVAssetExportSession(
                asset: composition,
                presetName: AVAssetExportPresetAppleM4A)

        exportSession?.outputFileType = AVFileType.m4a
        exportSession?.outputURL = /* create URL for output */
        // exportSession?.metadata = ...

        exportSession?.exportAsynchronously {
            switch exportSession?.status {
            case .unknown?: break
            case .waiting?: break
            case .exporting?: break
            case .completed?: break
            case .failed?: break
            case .cancelled?: break
            case .none: break
            }
        }

        /* Clean up (delete partial recordings, etc.) */
    }
This diagram helped me figure out what expects what and what inherits from where. (NSObject is implicitly the superclass wherever there is no inheritance arrow.)
Addendum 1: I had my reservations regarding the switch part instead of using KVO on AVAssetExportSessionStatus, but the docs are clear that exportAsynchronously's callback block "is invoked when writing is complete or in the event of writing failure".
Addendum 2: Just in case if someone has issues with AVQueuePlayer: 'An AVPlayerItem cannot be associated with more than one instance of AVPlayer'
Addendum 3: Unless you are recording in stereo, but mobile devices have one input as far as I know. Also, using fancy audio mixing would also require the use of AVCompositionTrack. A good SO thread: Proper AVAudioRecorder Settings for Recording Voice?
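For completeness, a single-channel voice-recording settings dictionary along the lines discussed in that thread could look like the following (in Objective-C to match the rest of this page; the exact values are illustrative assumptions, not a recommendation from the linked answer):
// Illustrative AVAudioRecorder settings for mono voice capture.
NSDictionary *settings = @{
    AVFormatIDKey:            @(kAudioFormatMPEG4AAC),
    AVSampleRateKey:          @44100.0,
    AVNumberOfChannelsKey:    @1,  // single mic input on most mobile devices
    AVEncoderAudioQualityKey: @(AVAudioQualityHigh)
};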
RecordAudioViewController.h
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>
@interface record_audio_testViewController : UIViewController <AVAudioRecorderDelegate> {
    IBOutlet UIButton *btnStart;
    IBOutlet UIButton *btnPlay;
    IBOutlet UIActivityIndicatorView *actSpinner;
    BOOL toggle;
    // Variables setup for access in the class:
    NSURL *recordedTmpFile;
    AVAudioRecorder *recorder;
    NSError *error;
}

@property (nonatomic, retain) IBOutlet UIActivityIndicatorView *actSpinner;
@property (nonatomic, retain) IBOutlet UIButton *btnStart;
@property (nonatomic, retain) IBOutlet UIButton *btnPlay;

- (IBAction)start_button_pressed;
- (IBAction)play_button_pressed;

@end
RecordAudioViewController.m
@synthesize actSpinner, btnStart, btnPlay;

- (void)viewDidLoad {
    [super viewDidLoad];

    // Start the toggle in true mode.
    toggle = YES;
    btnPlay.hidden = YES;

    // Instantiate an instance of the AVAudioSession object.
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    // Set up the audioSession for playback and record.
    // We could just use record and then switch it to playback later, but
    // since we are going to do both, let's set it up once.
    [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    // Activate the session
    [audioSession setActive:YES error:&error];
}

- (IBAction)start_button_pressed {
    if (toggle)
    {
        toggle = NO;
        [actSpinner startAnimating];
        [btnStart setTitle:@"Stop Recording" forState:UIControlStateNormal];
        btnPlay.enabled = toggle;
        btnPlay.hidden = !toggle;

        // Begin the recording session.
        // Error handling removed. Please add to your own code.

        // Set up the dictionary object with all the recording settings that this
        // recording session will use.
        // It's not clear to me which of these are required and which are the bare minimum.
        // This is a good resource: http://www.totodotnet.net/tag/avaudiorecorder/
        NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
        [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatAppleIMA4] forKey:AVFormatIDKey];
        [recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
        [recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];

        // Now that we have our settings, we are going to instantiate an instance of our recorder.
        // Generate a temp file for use by the recording.
        // This sample was one I found online and seems to be a good choice for making a tmp file that
        // will not overwrite an existing one.
        // I know this is a mess of collapsed things into 1 call. I can break it out if need be.
        recordedTmpFile = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"%.0f.%@", [NSDate timeIntervalSinceReferenceDate] * 1000.0, @"caf"]]];
        NSLog(@"Using File called: %@", recordedTmpFile);

        // Set up the recorder to use this file and record to it.
        recorder = [[AVAudioRecorder alloc] initWithURL:recordedTmpFile settings:recordSetting error:&error];
        // Use the recorder to start the recording.
        // I'm not sure why we set the delegate to self yet.
        // Found this in another example, but I'm fuzzy on this still.
        [recorder setDelegate:self];

        // We call this to start the recording process and initialize
        // the subsystems so that when we actually say "record" it starts right away.
        [recorder prepareToRecord];

        // Start the actual recording.
        [recorder record];

        // There is an optional method for doing the recording for a limited time, see
        // [recorder recordForDuration:(NSTimeInterval)10]
    }
    else
    {
        toggle = YES;
        [actSpinner stopAnimating];
        [btnStart setTitle:@"Start Recording" forState:UIControlStateNormal];
        btnPlay.enabled = toggle;
        btnPlay.hidden = !toggle;

        NSLog(@"Using File called: %@", recordedTmpFile);
        // Stop the recorder.
        [recorder stop];
    }
}

- (void)didReceiveMemoryWarning {
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
    // Release any cached data, images, etc that aren't in use.
}

- (IBAction)play_button_pressed {
    // The play button was pressed...
    // Set up the AVAudioPlayer to play the file that we just recorded.
    AVAudioPlayer *avPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:recordedTmpFile error:&error];
    [avPlayer prepareToPlay];
    [avPlayer play];
}

- (void)viewDidUnload {
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;

    // Clean up the temp file.
    NSFileManager *fm = [NSFileManager defaultManager];
    [fm removeItemAtPath:[recordedTmpFile path] error:&error];
    // Release the remaining objects (never call -dealloc directly).
    [recorder release];
    recorder = nil;
    recordedTmpFile = nil;
}

- (void)dealloc {
    [super dealloc];
}

@end
RecordAudioViewController.xib
Add two buttons: one to begin recording and another to play the recording.
I have this error when building and running my project in Xcode:
RootViewController may not respond to -parseXMLFileAtURL:
I'm attempting to develop the basic Apple RSS Reader from the tutorial at:
http://gigaom.com/apple/tutorial-build-a-simple-rss-reader-for-iphone/
my section of code that this error is occurring in looks like this:
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];

    if ([stories count] == 0)
    {
        NSString *path = @"http://feeds.feedburner.com/TheAppleBlog";
        [self parseXMLFileAtURL:path];
    }

    cellSize = CGSizeMake([newsTable bounds].size.width, 60);
}
Can anybody explain why this parseXMLFileAtURL method causes so much heartache?
Thanks
UPDATE:
I also define parseXMLFileAtURL in the same file; however, I placed that section of code after the viewDidAppear method (my bad). When I change the order of the methods, that error goes away. But then I get another error; maybe you can help with that one too! Here it is:
Class RootViewController does not implement the NSXMLParserDelegate protocol
within this section of code:
- (void)parseXMLFileAtURL:(NSString *)URL
{
    stories = [[NSMutableArray alloc] init];
    NSURL *xmlURL = [NSURL URLWithString:URL];
    rssParser = [[NSXMLParser alloc] initWithContentsOfURL:xmlURL];
    [rssParser setDelegate:self];
    [rssParser setShouldProcessNamespaces:NO];
    [rssParser setShouldReportNamespacePrefixes:NO];
    [rssParser setShouldResolveExternalEntities:NO];
    [rssParser parse];
}
The error occurs after the line: [rssParser setDelegate:self]; - what might be wrong with that?
Regarding your second error, that RootViewController does not implement the NSXMLParserDelegate protocol: just declare the conformance like this in your RootViewController.h file:
@interface RootViewController : UIViewController <NSXMLParserDelegate> { ...
Silly question: Does your RootViewController class have a method named -parseXMLFileAtURL: defined? If -parseXMLFileAtURL: comes after the method that calls it, you'll also need to declare it in your header.
Make sure you have parseXMLFileAtURL defined in your RootViewController.m
- (void)parseXMLFileAtURL:(NSString *)url
{
    ...
}
And make sure you have it defined in your header as:
-(void)parseXMLFileAtURL:(NSString *)url;
Also, make sure that when you try to get the contents from the web, you're using an NSURL, not an NSString. You can instantiate a NSURL with a string by:
NSURL *urlFromString = [[NSURL alloc] initWithString:@"http://..."];