I have a curious problem where my boolean variable changes before I can assign it to a variable to be read by my React Native code for the Spotify SDK.
I have pinpointed the EXACT location where it changes and am currently trying to figure out why.
I have a boolean called isLoggedIn that is set to 1 when a successful login occurs. This value is then assigned to loggedIn, which is sent to my React Native code:
else
{
// login successful
NSLog(#"What is the value of isLoggedIn %#", [self isLoggedIn]);
////THIS shows that isLoggedIn still equal to 1.
[self initializePlayerIfNeeded:[RNSpotifyCompletion onComplete:^(id unused, RNSpotifyError* unusedError) {
NSLog(#"What is the value of isLoggedIn right after %#", [self isLoggedIn]);
/////NOW the isLoggedIn variable is now equal to 0.
// do UI logic on main thread
dispatch_async(dispatch_get_main_queue(), ^{
[authController.presentingViewController dismissViewControllerAnimated:YES completion:^{
_loggingIn = NO;
NSLog(#"What is the value of isLoggedIn %#", [self isLoggedIn]);
NSNumber* loggedIn = [self isLoggedIn]; /////this is the line we need to be == 1
resolve(loggedIn);
if(loggedIn.boolValue)
{
[self sendEvent:#"login" args:#[]];
}
}];
});
}]];
}
So obviously I assume it is due to this line, right?
[self initializePlayerIfNeeded:[RNSpotifyCompletion onComplete:^(id unused, RNSpotifyError* unusedError)
This runs the initializePlayerIfNeeded method, but when I read the value of isLoggedIn at the end of that method, it still shows a boolean value of 1 (look at the last line):
-(void)initializePlayerIfNeeded:(RNSpotifyCompletion*)completion
{
if(![self hasPlayerScope])
{
[completion resolve:nil];
return;
}
// ensure only one thread is invoking the initialization at a time
BOOL initializedPlayer = NO;
NSError* error = nil;
BOOL allowCaching = (_cacheSize.unsignedIntegerValue > 0);
@synchronized(_player)
{
// check if player is already initialized
if(_player.initialized)
{
initializedPlayer = YES;
}
else
{
initializedPlayer = [_player startWithClientId:_auth.clientID audioController:nil allowCaching:allowCaching error:&error];
}
}
// handle initialization failure
if(!initializedPlayer)
{
[completion reject:[RNSpotifyError errorWithNSError:error]];
return;
}
// setup player
_player.delegate = self;
_player.playbackDelegate = self;
if(allowCaching)
{
_player.diskCache = [[SPTDiskCache alloc] initWithCapacity:_cacheSize.unsignedIntegerValue];
}
// attempt to log in the player
[self loginPlayer:completion];
NSLog(#"What is the value of isLoggedIn %#", [self isLoggedIn]);
////THIS shows that isLoggedIn still equal to 1.
}
So that method finishes with the value still equal to 1, but why, after it returns, does the next line show isLoggedIn equal to 0? I am extremely new to Objective-C, so every bit of knowledge is helpful for me. I am using this React Native Spotify library from GitHub: https://github.com/lufinkey/react-native-spotify
You can find the exact file I'm working with here: https://github.com/lufinkey/react-native-spotify/blob/master/ios/RNSpotify.m
What else would you need to help solve this?
EDIT: here's the isLoggedIn method whose value is changing:
RCT_EXPORT_BLOCKING_SYNCHRONOUS_METHOD(isLoggedIn)
{
if(!_initialized)
{
return @NO;
}
else if(_auth.session == nil)
{
return @NO;
}
return @YES;
}
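Note that, per the EDIT above, isLoggedIn is not a stored flag at all: it is recomputed from _initialized and _auth.session on every call. So if its value flips from 1 to 0 across the initializePlayerIfNeeded call, one of those two underlying values must be changing during the asynchronous work (most likely _auth.session being cleared or replaced, e.g. somewhere inside loginPlayer:). A minimal diagnostic sketch, assuming _initialized is a BOOL ivar and _auth.session is an object as the method above implies, would log both at each checkpoint to pinpoint which one resets:
// Hypothetical tracing helper; assumes direct access to the _initialized
// and _auth ivars used by the isLoggedIn method above.
-(void)logLoginState:(NSString*)checkpoint
{
    NSLog(@"[%@] _initialized=%d _auth.session=%@", checkpoint, _initialized, _auth.session);
}
Calling [self logLoginState:@"before initializePlayerIfNeeded"] and [self logLoginState:@"inside onComplete"] at the two NSLog points above should show whether it is _initialized or _auth.session that goes to 0/nil.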
Related
I create a UIAlertView in my function. The problem is that it is shown many times while the function runs. How can I write an if statement so it is shown only once, i.e. if the UIAlertView is already showing, don't show it again?
- (void)showAlert {
_myAlertView = nil;
_myAlertView = [[UIAlertView alloc] initWithTitle:NSLocalizedString(@"Call_On_Hold",nil)
message:NSLocalizedString(@"Please_Wait",nil)
delegate:self
cancelButtonTitle:NSLocalizedString(@"Close_call",nil)
otherButtonTitles:nil, nil];
_myAlertView.tag = myAlertViewsTag;
[_myAlertView show];
}
Here is the function where my UIAlertView appears continuously instead of just once.
- (void) trafficTimerRun:(NSTimer*)theTimer
{
++ trafficTimerTicks;
pjmedia_rtcp_stat stat;
if (!get_stream_info([call_id intValue], &stat)) {
return;
}
LogDebug(TAG_SIP, #"Got %d bytes on stream 0 (previous: %d)", stat.rx.bytes, prev_bytes);
if (stat.rx.bytes == prev_bytes) {
if (trafficTimerTicks >= 10) {
// Steve-note: Here we need to show a pop-up message when the call in on hold when the trafficTimerTicks run.
[self showAlert];
LogError(TAG_SIP, #"No traffic received, hanging up");
// [theTimer invalidate];
// broken = YES; Steve note: The call shouldnt broke.
// [self hangup]; Steve note: The call shouldnt hangup.
}
}
}
Use a boolean:
bool alertIsShowing = false;
and in your updating method put something like this:
if (trafficTimerTicks >= 10){
if (!alertIsShowing){
alertIsShowing = true;
[self showAlert];
}
}
Then when your alert is dismissed, reset your boolean:
alertIsShowing = false;
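A minimal sketch of where that reset could live, assuming alertIsShowing is an instance variable and self is the alert's delegate (it already is, since the alert is created with delegate:self):
// UIAlertViewDelegate callback: fires once the alert has been dismissed.
- (void)alertView:(UIAlertView *)alertView didDismissWithButtonIndex:(NSInteger)buttonIndex
{
    if (alertView.tag == myAlertViewsTag) {
        alertIsShowing = false; // allow the alert to be shown again later
    }
}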
These two function calls seem to be conflicting:
MagicalRecord.save({ (localContext) in
let items = NewsItem.staleNewsItems(in: localContext)
if ((items?.count)! > 0){
items?.forEach({ (item) in
if let object = item as? NSManagedObject {
object.mr_deleteEntity(in: localContext)
}
})
}
})
and
- (void) buildAndFetchFRCsInContext:(NSManagedObjectContext*)context {
self.newsItemsFRC = [self buildFetchResultsControllerForClass:[NewsItem class] sortedBy:@"id" withPredicate:nil inContext:context];
[context performBlock:^{
__unused NSDate* start = [NSDate date];
NSError* error;
[self.newsItemsFRC performFetch:&error]; // this line crashes
[self calculateAndBroadcastCounts];
}];
}
Is this save call thread-safe? If so, what could cause these two functions to make each other crash?
The issue is that I'm modifying the news items outside of the context they were created in. So to fix the issue I had to move the code to the main thread. I switched from MagicalRecord's save to just performBlockAndWait, which is guaranteed to run on the calling thread:
private static func cleanUpNewsItems() -> Void {
let context = NSManagedObjectContext.mr_()
context.performAndWait {
var itemsToDelete = [NSManagedObject]()
if let items = NewsItem.staleNewsItems(in: context) {
items.forEach({ (item) in
itemsToDelete.append(item as! NSManagedObject)
})
}
for item in itemsToDelete {
context.delete(item)
}
do {
try context.save()
} catch let error as NSError {
print("Error While Deleting Note: \(error.userInfo)")
}
}
}
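For reference, the same pattern in Objective-C (a sketch only; the fetch below is a placeholder for whatever "stale" predicate NewsItem.staleNewsItems actually uses) keeps the fetch, delete and save on the context's own queue via performBlockAndWait:
- (void)cleanUpNewsItems:(NSManagedObjectContext *)context
{
    [context performBlockAndWait:^{
        // Placeholder fetch; substitute the real "stale" predicate here.
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"NewsItem"];
        NSError *fetchError = nil;
        NSArray *staleItems = [context executeFetchRequest:request error:&fetchError];
        for (NSManagedObject *item in staleItems) {
            [context deleteObject:item];
        }
        NSError *saveError = nil;
        if (![context save:&saveError]) {
            NSLog(@"Error while deleting news items: %@", saveError.userInfo);
        }
    }];
}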
Perhaps I'm still climbing the reactive learning curve, but I am having a hard time figuring out how to bridge a non-reactive class with the rest of my reactive code. I am using a category to extend the non-reactive class.
The property is just an enum representing the current state of a network action, with states like New, Submitted, Processing and Completed. Right now I have written the following method in my category:
@implementation JRequestBase (RACExtensions)
- (RACSignal*) rac_RequestStateSignal
{
return RACAble(self, state);
}
@end
However, when state transitions from Processing -> Completed or from any state to Errored I want this signal to send Completed or Error instead of Next Value. How can I accomplish this in a category? I want to do something like:
@implementation JRequestBase (RACExtensions)
- (RACSignal*) rac_RequestStateSignal
{
return [RACAble(self, state) map:^(NSNumber *state){
if ([state intValue] == iRequestStateComplete)
{
// SEND COMPLETE
}
else if ([state intValue] == iRequestStateErrored)
{
// SEND ERROR
}
else
{
return state;
}
}];
}
@end
Edit: I took a look at the GHAPIDemo and came up with the following:
- (RACSignal*) rac_RequestSignal
{
RACSubject *subject = [[RACReplaySubject alloc] init];
[[RACAble(self, state) subscribeNext:^(NSNumber* s){
if ( [s intValue] == JRequestStateCompleted)
{
[subject sendNext:self];
[subject sendCompleted];
}
else if ([s intValue] == JRequestStateErrored)
{
NSMutableDictionary *dict = [NSMutableDictionary dictionary];
// .. Set up dict with necessary values.
NSError *error = [NSError errorWithDomain:#"blah" code:1 userInfo:dict];
[subject sendError:error];
}
}];
return subject;
}
I'm not 100% sure this is the right way but it seems to be working.
Whenever you want to map values → signal events, instead of values → values, you should use -flattenMap: to return a signal corresponding to each input value. Then, as the "flatten" in the name implies, they'll be combined into one resulting signal.
However, this case is a little different, because you want to terminate the signal as soon as you get the Complete value. We'll use -takeUntilBlock: to represent that part.
The resulting code looks something like this:
- (RACSignal*) rac_RequestStateSignal
{
return [[RACObserve(self, state)
takeUntilBlock:^ BOOL (NSNumber *state){
return [state intValue] == iRequestStateComplete;
}]
flattenMap:^(NSNumber *state){
if ([state intValue] == iRequestStateErrored)
{
// Create a meaningful NSError here if you can.
return [RACSignal error:nil];
}
else
{
return [RACSignal return:state];
}
}];
}
(I used RACObserve because ReactiveCocoa 2.0 is now the only supported version, but you can use RACAble until you're ready to upgrade.)
As a general rule, you should avoid using subjects when possible, since they make code more stateful and reduce laziness.
I'm trying to take a video created using the iVidCap plugin and add audio to it. Basically the exact same thing as in this question: Writing video + generated audio to AVAssetWriterInput, audio stuttering. I've used the code from this post as a basis to try and modify the iVidCap.mm file myself, but the app always crashes in endRecordingSession.
I'm not sure how I need to modify endRecordingSession to accommodate the audio (the original plugin just creates a video file). Here is the function:
- (int) endRecordingSession: (VideoDisposition) action {
NSLog(#"Start endRecordingSession");
NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
NSLog(#"Auto released pool");
NSString *filePath;
BOOL success = false;
[videoWriterInput markAsFinished];
NSLog(#"Mark video writer input as finished");
//[audioWriterInput markAsFinished];
// Wait for the video status to become known.
// Is this really doing anything?
int status = videoWriter.status;
while (status == AVAssetWriterStatusUnknown) {
NSLog(#"Waiting for video to complete...");
[NSThread sleepForTimeInterval:0.5f];
status = videoWriter.status;
}
NSLog(#"Video completed");
#synchronized(self) {
success = [videoWriter finishWriting];
NSLog(#"Success: %#", success);
if (!success) {
// We failed to successfully finalize the video file.
NSLog(#"finishWriting returned NO");
} else {
// The video file was successfully written to the Documents folder.
filePath = [[self getDocumentsFileURL:videoFileName] path];
if (action == Save_Video_To_Album) {
// Move the video to an accessible location on the device.
NSLog(#"Temporary video filePath=%#", filePath);
if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(filePath)) {
NSLog(#"Video IS compatible. Adding it to photo album.");
UISaveVideoAtPathToSavedPhotosAlbum(filePath, self, #selector(copyToPhotoAlbumCompleteFromVideo: didFinishSavingWithError: contextInfo:), nil);
} else {
NSLog(#"Video IS NOT compatible. Could not be added to the photo album.");
success = NO;
}
} else if (action == Discard_Video) {
NSLog(#"Video cancelled. Removing temporary video file: %#", filePath);
[self removeFile:filePath];
}
}
[self cleanupWriter];
}
isRecording = false;
[pool drain];
return success;
}
Right now it crashes on [videoWriter finishWriting]. I tried adding [audioWriterInput markAsFinished], but then it crashes on that. I would contact the original poster since it seems like they got it working, but there doesn't seem to be a way to send private messages.
Does anyone have any suggestions on how I can get this to work or why it's crashing? I've tried my best to figure this out but I'm pretty new to Obj-C. I can post the rest of the code if needed (a lot of it is in the original post referenced earlier).
The issue might actually be in the writeAudioBuffer function.
If you copied the code from that post but didn't change it, then you will certainly have some problems.
You need to do something like this:
if ( ![self waitForAudioWriterReadiness]) {
NSLog(#"WARNING: writeAudioBuffer dropped frame after wait limit reached.");
return 0;
}
OSStatus status;
CMBlockBufferRef bbuf = NULL;
CMSampleBufferRef sbuf = NULL;
size_t buflen = n * nchans * sizeof(float);
CMBlockBufferRef tmp_bbuf = NULL;
status = CMBlockBufferCreateWithMemoryBlock(
kCFAllocatorDefault,
samples,
buflen,
kCFAllocatorDefault,
NULL,
0,
buflen,
0,
&tmp_bbuf);
if (status != noErr || !tmp_bbuf) {
NSLog(#"CMBlockBufferCreateWithMemoryBlock error");
return -1;
}
// Copy the buffer so that we get a copy of the samples in memory.
// CMBlockBufferCreateWithMemoryBlock does not actually copy the data!
//
status = CMBlockBufferCreateContiguous(kCFAllocatorDefault, tmp_bbuf, kCFAllocatorDefault, NULL, 0, buflen, kCMBlockBufferAlwaysCopyDataFlag, &bbuf);
//CFRelease(tmp_bbuf); // causes abort?!
if (status != noErr) {
NSLog(#"CMBlockBufferCreateContiguous error");
//CFRelease(bbuf);
return -1;
}
CMTime timestamp = CMTimeMake(sample_position_, 44100);
status = CMAudioSampleBufferCreateWithPacketDescriptions(
kCFAllocatorDefault, bbuf, TRUE, 0, NULL, audio_fmt_desc_, 1, timestamp, NULL, &sbuf);
sample_position_ += n;
if (status != noErr) {
NSLog(#"CMSampleBufferCreate error");
return -1;
}
BOOL r = [audioWriterInput appendSampleBuffer:sbuf];
if (!r) {
NSLog(#"appendSampleBuffer error");
}
//CFRelease(bbuf); // crashes, don't know why.. Is there a leak here?
//CFRelease(sbuf);
return 0;
There are a few memory-management details here that I am unsure about.
Additionally, be sure to use:
audioWriterInput.expectsMediaDataInRealTime = YES;
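For context, here is a sketch of how the audio AVAssetWriterInput might be created and attached before recording starts. The LPCM settings are assumptions chosen to match the 44.1 kHz float samples that writeAudioBuffer appends (CMTimeMake(sample_position_, 44100)); adjust them to your actual audio format:
// Assumed format: 44.1 kHz, stereo, 32-bit float, interleaved LPCM.
NSDictionary *audioSettings = @{ AVFormatIDKey: @(kAudioFormatLinearPCM),
                                 AVSampleRateKey: @44100,
                                 AVNumberOfChannelsKey: @2,
                                 AVLinearPCMBitDepthKey: @32,
                                 AVLinearPCMIsFloatKey: @YES,
                                 AVLinearPCMIsBigEndianKey: @NO,
                                 AVLinearPCMIsNonInterleavedKey: @NO };
audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                      outputSettings:audioSettings];
audioWriterInput.expectsMediaDataInRealTime = YES;
if ([videoWriter canAddInput:audioWriterInput]) {
    [videoWriter addInput:audioWriterInput];
}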
I'm using FMOD for the first time, and I don't understand why my code doesn't trigger Sound Designer's keyoff.
Working env
iOS
Xcode
Verified
.fev and event's keyoff tested with fmod_eventPlayer
all FMOD_RESULT values are OK
Here is the code, processed chronologically:
-(void) initFmod
{
...
//init
result = _eventSystem->init(32, FMOD_INIT_NORMAL | FMOD_INIT_ENABLE_PROFILE, NULL, FMOD_EVENT_INIT_NORMAL);
...
//load music bank settings
result = FMOD_OK;
[[NSString stringWithFormat:#"%#/_music.fev", [[NSBundle mainBundle] resourcePath]] getCString:buffer maxLength:200 encoding:NSASCIIStringEncoding];
result = _eventSystem->load(buffer, NULL, NULL);
...
}
-(void) onMusicGameStart
{
///////////// LOAD Game Music ////////////
//Build Event name
FMOD_RESULT result = FMOD_OK;
NSString *musicGameEvent = #"music/music/music_sample_with_keyOff";
const char *eventGame = [musicGameEvent UTF8String];
//Get event from Fmod
result = _eventSystem->getEvent(eventGame, FMOD_EVENT_DEFAULT, &_musicGame);
result = _musicGame->start();
...
}
-(void) stopMusic
{
//Stop current Music
[self triggerEventKeyoff:_musicGame];
}
-(void) triggerEventKeyoff:(FMOD::Event*)event
{
if(event)
{
FMOD_RESULT result = FMOD_OK;
//Get Event's Parameter
FMOD::EventParameter *param;
result = event->getParameterByIndex(0, &param);
//Check error message
[self checkResult:result even:nil];
//trigger KeyOff
if(result == FMOD_OK)
{
result = param->keyOff();
//Check error message
[self checkResult:result even:nil];
}
}
}
The music associated with _musicGame doesn't play its keyoff and just continues playing.
_musicGame is only set in onMusicGameStart().
I don't know what to test from this point.
By the way, I'm not able to launch fmod_profiler (it crashes at launch).
Thanks for your replies.
There is a bug with the current fmod_profiler; it's simple to fix, though:
Open the terminal and navigate to the location of fmod_profiler.app
Navigate into fmod_profiler.app/Contents/MacOS
Type: "chmod u+x fmod_profiler
Now you can run the app properly from the finder.
Regarding keyoff, I would contact FMOD support.