How does dispatch_group function? - objective-c

I am using dispatch_group in my code, but its functionality is not clear to me.
I have used the code below:
1. dispatch_group_t group = dispatch_group_create();
2. dispatch_group_enter(group);
3. [self exportVideoAsset:avAsset withRange:CMTimeRangeMake(start1, duration1) inGCDGroup:group];
4. dispatch_group_enter(group);
5. [self exportVideoAsset:avAsset withRange:CMTimeRangeMake(start2, duration2) inGCDGroup:group];
Here lines 2 and 4 execute first, and then the exportVideoAsset function in lines 3 and 5 is executed. But the exportVideoAsset calls in lines 3 and 5 run in no particular order. I want the exportVideoAsset call in line 3 to always execute first, and only then line 5.

You can try this; it always calls block1 first and then block2.
Working fine on Xcode 11, iOS 13.
dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // block1
    NSLog(@"Block1");
});
[NSThread sleepForTimeInterval:1.0];
dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // block2
    NSLog(@"Block2");
});
or
dispatch_group_t group = dispatch_group_create();
dispatch_group_enter(group);
// Task is completed, so signal that it has finished
NSLog(@"Block1");
dispatch_group_leave(group);
// Make the second task wait until the first has completed before running
dispatch_group_enter(group);
// Task is completed, so signal that it has finished
NSLog(@"Block2");
dispatch_group_leave(group);
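If the requirement is strictly that the export started in line 3 finishes before line 5 runs, another option is to wait on the group between the two calls. This is only a sketch, not part of the original answer, and it assumes exportVideoAsset:withRange:inGCDGroup: calls dispatch_group_leave(group) when its export completes:
dispatch_group_t group = dispatch_group_create();
dispatch_group_enter(group);
[self exportVideoAsset:avAsset withRange:CMTimeRangeMake(start1, duration1) inGCDGroup:group];
// Block here until the first export calls dispatch_group_leave(group).
// Run this on a background queue, not on the queue the export completes on.
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
dispatch_group_enter(group);
[self exportVideoAsset:avAsset withRange:CMTimeRangeMake(start2, duration2) inGCDGroup:group];
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);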

Related

CAPL CANoe wait for a specific CAN message

I'm currently trying to test auto-generated code for a controller.
The test will be done in CANoe with CAPL.
I've already tried a lot of things and it's working well, but now I want to test a "message lost" scenario.
I need something like this:
CAN 1 sends a test message 10 times; 3 times a message will be lost.
CAN 2, which receives the signals, has to react to this with a specific value.
I need something like WaitForMessage(int aTimeOut, Message yourMessage), which returns, for example, 0 for successfully receiving the message or -1 for a timeout.
on timer sendMessage
{
  if (anzahlAnBotschaften > 0) // amount of sent messages
  {
    if (anzahlAnBotschaften % 3 == 0) // 3 times message lost
    {
      botschaftWirdGesendet = 0;
      lRet = ???? // here is the part where I want to wait for an answer from CAN2
      if (lRet != 0)
      {
        TestStepPass("010.1", "SNA was triggered");
      }
      else
      {
        TestStepFail("010.1", "Timeout was triggered, but no SNA was found");
      }
    }
    else
    {
      botschaftWirdGesendet = 1;
      output(sendingCan_BrkSys);
      lRet = TestGetWaitEventMsgData(receivingCan_aMessage);
      if (lRet == 0)
      {
        // same for the positive case
      }
    }
    anzahlAnBotschaften--;
    setTimer(botschaftsAusfall, 20);
  }
}
What's the problem? Just use the CAPL function testWaitForMessage as described in the help.
You are using a test node, since there are TestStepFail/TestStepPass calls in your code, so everything you need in terms of controlling your test sequence begins with test...
P.S. One more thing: I doubt that with this code you can detect what is described in the comment
if(anzahlAnBotschaften % 3 == 0) // 3 times message lost
anzahlAnBotschaften in German means the count of received messages. So when, as described above, you receive 7 of 10 messages (anzahlAnBotschaften == 7), this condition is false.

GCD - Critical Section/Mutex

Can somebody answer with a short example:
How do I correctly lock a section of code with this condition: if the section is locked by some thread, don't block the other threads; they should just skip this section and keep going.
OK, here is the working example (credit goes to @KenThomases ...)
import Dispatch

let semaphore = DispatchSemaphore(value: 1)
let printQueue = DispatchQueue(label: "print queue")
let group = DispatchGroup()

func longRunningTask(i: Int) {
    printQueue.async(group: group) {
        print(i, "GREEN semaphore")
    }
    usleep(1000) // approx. 1 millisecond
    printQueue.async(group: group) {
        print(i, "job done")
    }
}

func shortRunningTask(i: Int) {
    group.enter()
    guard semaphore.wait(timeout: .now() + 0.001) == .success else { // wait approx. 1 millisecond from now
        printQueue.async(group: group) {
            print(i, "RED semaphore, job not done")
        }
        group.leave()
        return
    }
    longRunningTask(i: i)
    semaphore.signal()
    group.leave()
}

printQueue.async(group: group) {
    print("running")
}
DispatchQueue.concurrentPerform(iterations: 10, execute: shortRunningTask)
group.wait()
print("all done")
and its printout
running
0 GREEN semaphore
2 RED semaphore, job not done
1 RED semaphore, job not done
3 RED semaphore, job not done
0 job done
4 GREEN semaphore
5 RED semaphore, job not done
6 RED semaphore, job not done
7 RED semaphore, job not done
4 job done
8 GREEN semaphore
9 RED semaphore, job not done
8 job done
all done
Program ended with exit code: 0

How can I model this code in Promela/SPIN?

The following algorithm attempts to enforce mutual exclusion
between two processes P1 and P2 each of which runs the code below.
You can assume that initially sema = 0.
while true do {
    atomic { if sema = 0
             then sema := 1
             else go to line 2 }
    critical section;
    sema := 0;
}
How can I model this code in Promela/SPIN?
Thank you.
This should be quite straightforward:
bit sema = 0;    /* shared between both processes */

active [2] proctype P() {
    do
    :: atomic {
           sema == 0 -> sema = 1
       };
       /* critical section */
       sema = 0
    od
}
Possibly you do not need the do loop, if in your code you only needed it for some active waiting. The atomic block is only executable if sema is set to 0, and then it executes at once. Spin has built-in passive waiting semantics.

How to change from one musicSequence to another without time delay

I'm playing a MIDI sequence via MusicPlayer, which I loaded from a MIDI file, and I want to change to another sequence during playback.
When I try this:
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
it stops the playback. So I start it back again and set the time with
MusicPlayerSetTime(_player, currentTime);
so it plays again where the previous sequence stopped, but there is a little delay.
I've tried to add the time interval to currentTime, which I got by obtaining the time before stopping and after starting again. But there is still a delay.
I was wondering if there is an alternative to stopping -> changing sequence -> starting again.
You definitely need to manage the AUSamplers if you are adding and removing tracks or switching sequences. It is probably cleaner to dispose of the AUSampler and create a new one for each new track, but it is also possible to 'recycle' AUSamplers; that means you will need to keep track of them.
Managing AUSamplers means that when you are no longer using an instance of one (for example if you delete or replace a MusicTrack), you need to disconnect it from the AUMixer instance, remove it from the AUGraph instance, and then update the AUGraph.
There are lots of ways to handle all this. For convenience in keeping track of AUSampler instances' bus number, loaded sound font and some other stuff, I use a subclass of NSObject named SamplerAudioUnit to contain all the needed properties and methods. Same for MusicTracks - I have a Track class - but this may not be needed in your project.
The gist though is that AUSamplers need to be managed for performance and memory. If an instance is no longer being used it should be removed and the AUMixer bus input freed up.
BTW - I checked the docs and there is apparently no technical limit to the number of mixer busses - but the number does need to be specified.
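Specifying the bus count might look something like this. This is only a sketch of my own, not part of the original answer; mixerUnit and busCount are assumed names (the mixer's AudioUnit would typically be fetched from its AUNode with AUGraphNodeInfo):
// Tell the multichannel mixer how many input busses to provide.
UInt32 busCount = 8; // hypothetical value
OSStatus result = AudioUnitSetProperty(mixerUnit,
                                       kAudioUnitProperty_ElementCount,
                                       kAudioUnitScope_Input,
                                       0,
                                       &busCount,
                                       sizeof(busCount));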
// this is not cut and paste code - just an example of managing the AUSampler instance
- (OSStatus)deleteTrack:(Track *)trackObj
{
    OSStatus result = noErr;

    // turn off MP if playing
    BOOL MPstate = [self isPlaying];
    if (MPstate) {
        MusicPlayerStop(player);
    }

    // disconnect node from mixer + update list of mixer buses
    SamplerAudioUnit *samplerObj = trackObj.sampler;
    UInt32 busNumber = samplerObj.busNumber;

    result = AUGraphDisconnectNodeInput(graph, mixerNode, busNumber);
    if (result) { [self printErrorMessage:@"AUGraphDisconnectNodeInput" withStatus:result]; }

    [self clearMixerBusState:busNumber]; // routine that keeps track of available busses

    result = MusicSequenceDisposeTrack(sequence, trackObj.track);
    if (result) { [self printErrorMessage:@"MusicSequenceDisposeTrack" withStatus:result]; }

    // remove AUSampler node
    result = AUGraphRemoveNode(graph, samplerObj.samplerNode);
    if (result) { [self printErrorMessage:@"AUGraphRemoveNode" withStatus:result]; }

    result = AUGraphUpdate(graph, NULL);
    if (result) { [self printErrorMessage:@"AUGraphUpdate" withStatus:result]; }

    samplerObj = nil;
    trackObj = nil;

    if (MPstate) {
        MusicPlayerStart(player);
    }

    // CAShow(graph);
    // CAShow(sequence);
    return result;
}
Because
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
will still cause the player to stop, it is still possible to hear a little break.
So instead of updating the MusicSequence, I went ahead and changed the content of the tracks, which won't cause any breaks:
MusicTrack currentTrack;
MusicTrack currentTrack2;
MusicSequenceGetIndTrack(musicSequence, 0, &currentTrack);
MusicSequenceGetIndTrack(musicSequence, 1, &currentTrack2);
MusicTrackClear(currentTrack, 0, _trackLen);
MusicTrackClear(currentTrack2, 0, _trackLen);

MusicSequence tmpSequence;
switch (number) {
    case 0:
        tmpSequence = musicSequence1;
        break;
    case 1:
        tmpSequence = musicSequence2;
        break;
    case 2:
        tmpSequence = musicSequence3;
        break;
    case 3:
        tmpSequence = musicSequence4;
        break;
    default:
        tmpSequence = musicSequence1;
        break;
}

MusicTrack tmpTrack;
MusicTrack tmpTrack2;
MusicSequenceGetIndTrack(tmpSequence, 0, &tmpTrack);
MusicSequenceGetIndTrack(tmpSequence, 1, &tmpTrack2);

MusicTimeStamp trackLen = 0;
UInt32 trackLenLenLen = sizeof(trackLen);
MusicTrackGetProperty(tmpTrack, kSequenceTrackProperty_TrackLength, &trackLen, &trackLenLenLen);
_trackLen = trackLen;

MusicTrackCopyInsert(tmpTrack, 0, _trackLen, currentTrack, 0);
MusicTrackCopyInsert(tmpTrack2, 0, _trackLen, currentTrack2, 0);
No disconnection of nodes, no updating the graph, no stopping the player.

AudioQueue fails to start

I create an AudioQueue in the following steps.
Create a new output with AudioQueueNewOutput
Add a property listener for the kAudioQueueProperty_IsRunning property
Allocate my buffers with AudioQueueAllocateBuffer
Call AudioQueuePrime
Call AudioQueueStart
The problem is, when I call AudioQueuePrime it outputs the following error on the console:
AudioConverterNew returned -50
Prime failed (-50); will stop (11025/0 frames)
What's wrong here?
PS:
I got this error on iOS (Device & Simulator)
The output callback installed when calling AudioQueueNewOutput is never called!
The file is valid and the AudioStreamBasicDescription does match the format (AAC)
I tested the file with Mat's AudioStreamer and it seems to work there
Sample Init Code:
// Get the stream description from the first sample buffer
OSStatus err = noErr;
EDSampleBuffer *firstBuf = [sampleBufs objectAtIndex:0];
AudioStreamBasicDescription asbd = firstBuf.streamDescription;
// TODO: remove temporary format setup, just to ensure format for now
asbd.mSampleRate = 44100.00;
asbd.mFramesPerPacket = 1024; // AAC default
asbd.mChannelsPerFrame = 2;
pfcc(asbd.mFormatID);
// -----------------------------------
// Create a new output
err = AudioQueueNewOutput(&asbd, _audioQueueOutputCallback, self, NULL, NULL, 0, &audioQueue);
if (err) {
[self _reportError:kASAudioQueueInitializationError];
goto bail;
}
// Add property listener for queue state
err = AudioQueueAddPropertyListener(audioQueue, kAudioQueueProperty_IsRunning, _audioQueueIsRunningCallback, self);
if (err) {
[self _reportError:kASAudioQueuePropertyListenerError];
goto bail;
}
// Allocate a queue buffers
for (int i=0; i<kAQNumBufs; i++) {
err = AudioQueueAllocateBuffer(audioQueue, kAQDefaultBufSize, &queueBuffer[i]);
if (err) {
[self _reportError:kASAudioQueueBufferAllocationError];
goto bail;
}
}
// Prime and start
err = AudioQueuePrime(audioQueue, 0, NULL);
if (err) {
printf("failed to prime audio queue %ld\n", err);
goto bail;
}
err = AudioQueueStart(audioQueue, NULL);
if (err) {
printf("failed to start audio queue %ld\n", err);
goto bail;
}
These are the format flags from the audio file stream
rate: 44100.000000
framesPerPacket: 1024
format: aac
bitsPerChannel: 0
reserved: 0
channelsPerFrame: 2
bytesPerFrame: 0
bytesPerPacket: 0
formatFlags: 0
cookieSize 39
AudioConverterNew returned -50
Prime failed (-50); will stop (11025/0 frames)
What's wrong here?
You did it wrong.
No, really. That's what that error means, and that's ALL that error means.
That's why paramErr (-50) is such an annoying error code: It doesn't say a damn thing about what you (or anyone else) did wrong.
The first step to formulating guesses as to what it's complaining about is to find out what function returned that error. Change your _reportError: method to enable you to log the name of the function that returned the error. Then, log the parameters you're passing to that function and figure out why it's of the opinion that those parameters to that function don't make sense.
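A helper along these lines can take care of that. This is only a sketch of my own (CheckStatus is a hypothetical name, not from the answer or the poster's code); it logs the exact expression that failed together with the status it returned:
// Wrap each Audio Toolbox call so the failing call and its OSStatus get logged.
#define CheckStatus(expr) \
    do { \
        OSStatus __status = (expr); \
        if (__status != noErr) { \
            NSLog(@"%s failed with status %d", #expr, (int)__status); \
        } \
    } while (0)

// Usage:
CheckStatus(AudioQueuePrime(audioQueue, 0, NULL));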
My own wild guess is that it's because the values you forced into the ASBD don't match the characteristics of the sample buffer. The log output you included in your question says “11025/0 frames”; 11025 is a common sample rate (but different from 44100). I assume you'd know what the 0 refers to.
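One way to test that guess, sketched below under my own assumptions (audioFileStream is a hypothetical AudioFileStreamID for the stream being parsed; the poster's code gets its ASBD from a sample buffer instead), is to pass the parser's own data format to the queue rather than hand-filling fields:
// Ask the file stream parser for the ASBD it derived and use it unchanged.
AudioStreamBasicDescription parsedFormat = {0};
UInt32 propSize = sizeof(parsedFormat);
OSStatus status = AudioFileStreamGetProperty(audioFileStream,
                                             kAudioFileStreamProperty_DataFormat,
                                             &propSize, &parsedFormat);
if (status == noErr) {
    status = AudioQueueNewOutput(&parsedFormat, _audioQueueOutputCallback, self,
                                 NULL, NULL, 0, &audioQueue);
}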