Timecode referenced from computer clock - objective-c

I basically want to press a button that starts a timecode counter at 30 fps (i.e. updated every 1/30th of a second). I want the timecode to be referenced to the computer's built-in clock. I can easily get the current time as HH:mm:ss using NSDate, but I need the counter to start from zero and include frames, formatted as HH:mm:ss:ff.
Thoughts?

Use a CVDisplayLink to generate a pulse with the video card's accuracy; this will be much more accurate than an NSTimer or a dispatch queue. CoreMedia/CoreVideo also talks SMPTE natively.
CVReturn MyDisplayCallback(CVDisplayLinkRef displayLink,
                           const CVTimeStamp *inNow,
                           const CVTimeStamp *inOutputTime,
                           CVOptionFlags flagsIn,
                           CVOptionFlags *flagsOut,
                           void *displayLinkContext) {
    CVSMPTETime timecodeNow = inNow->smpteTime; // it's that easy!
    DoStuffWith(timecodeNow); // you might have to modulo this a bit if the display framerate is greater than 30fps.
    return kCVReturnSuccess;
}

CVDisplayLinkRef _link;
CVDisplayLinkCreateWithCGDisplay(CGMainDisplayID(), &_link);
CVDisplayLinkSetOutputCallback(_link, MyDisplayCallback, NULL);
CVDisplayLinkStart(_link);
EDIT: After playing with this a bit, I've noticed that the SMPTE fields from the display link aren't getting filled out; OTOH the host time is accurate. Just use:
inNow->videoTime / inNow->videoTimeScale
to obtain the number of seconds of uptime, and
inNow->videoTime % inNow->videoTimeScale
to get the remainder.
Here's as far as I got:
@implementation JHDLTAppDelegate

CVReturn MYCGCallback(CVDisplayLinkRef displayLink,
                      const CVTimeStamp *inNow,
                      const CVTimeStamp *inOutputTime,
                      CVOptionFlags flagsIn,
                      CVOptionFlags *flagsOut,
                      void *displayLinkContext) {
    CVTimeStamp now = *inNow; // copy the timestamp; the pointer is only valid for the duration of the callback
    dispatch_async(dispatch_get_main_queue(), ^{
        JHDLTAppDelegate *obj = (__bridge JHDLTAppDelegate *)displayLinkContext;
        uint64_t seconds = now.videoTime / now.videoTimeScale;
        [obj.outputView setStringValue:[NSString stringWithFormat:@"days: %llu/hours: %llu/seconds: %llu (%lld:%d)",
                                        seconds / (3600 * 24),
                                        seconds / 3600,
                                        seconds,
                                        now.videoTime, now.videoTimeScale]];
    });
    return kCVReturnSuccess;
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    CVDisplayLinkCreateWithCGDisplay(CGMainDisplayID(), &_ref);
    CVDisplayLinkSetOutputCallback(_ref, MYCGCallback, (__bridge void *)self);
    CVDisplayLinkStart(_ref);
}

- (void)dealloc
{
    CVDisplayLinkStop(_ref);
    CVDisplayLinkRelease(_ref);
}

@end
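To get the zero-based HH:mm:ss:ff string the question asks for, a rough sketch inside the callback could look like the following (my addition; it assumes a 30 fps output and stashes the first timestamp in a static, which is fine for a single display link):

// Sketch: convert elapsed display-link time into a zero-based HH:mm:ss:ff at 30 fps.
static int64_t startVideoTime = -1;                 // first timestamp seen (assumed single display link)
if (startVideoTime < 0) startVideoTime = inNow->videoTime;

int64_t elapsed      = inNow->videoTime - startVideoTime;
int64_t totalSeconds = elapsed / inNow->videoTimeScale;
int64_t remainder    = elapsed % inNow->videoTimeScale;
int64_t frames       = (remainder * 30) / inNow->videoTimeScale; // frame count within the current second

NSString *timecode = [NSString stringWithFormat:@"%02lld:%02lld:%02lld:%02lld",
                      (long long)(totalSeconds / 3600),
                      (long long)((totalSeconds / 60) % 60),
                      (long long)(totalSeconds % 60),
                      (long long)frames];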

This should work using an NSTimer object and doing the visual updates in its invocation. You can set the timer to fire every 1/30th of a second (about 33.3 milliseconds). The only problem I see is that over very long stretches the timecode will drift slightly. Also, if this is video related, be careful because some video is encoded at 24 fps. I would then have a counter do a +1 in the method fired by the timer unless the counter equals 30, in which case I would reset it to 1. You should be able to initialize an NSDateFormatter object with a custom format string to combine the current time and your counter variable in the format you want to display to the user.
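A rough sketch of that approach follows (hedged: the timecodeLabel outlet and the _startDate/_timer ivars are my own assumptions, and the sketch derives the frame index from elapsed wall-clock time rather than a raw counter, so late timer fires don't accumulate drift):

// Sketch only: 30 fps timecode driven by an NSTimer, referenced to the clock at start.
- (void)startTimecode
{
    _startDate = [NSDate date];                       // assumed ivar: the moment the counter starts
    _timer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                              target:self
                                            selector:@selector(tick:)
                                            userInfo:nil
                                             repeats:YES];
}

- (void)tick:(NSTimer *)timer
{
    NSTimeInterval elapsed = -[_startDate timeIntervalSinceNow];
    NSUInteger totalFrames = (NSUInteger)(elapsed * 30.0);
    NSUInteger ff = totalFrames % 30;
    NSUInteger ss = (totalFrames / 30) % 60;
    NSUInteger mm = (totalFrames / (30 * 60)) % 60;
    NSUInteger hh = totalFrames / (30 * 3600);
    [self.timecodeLabel setStringValue:                // assumed NSTextField outlet
        [NSString stringWithFormat:@"%02lu:%02lu:%02lu:%02lu",
         (unsigned long)hh, (unsigned long)mm, (unsigned long)ss, (unsigned long)ff]];
}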


Changing Sine Wave frequencies in the same AVAudioPCMBuffer

I've been working on getting a clean sine wave sound that can change frequencies when different notes are played. From what I've understood, I need to resize the buffer's frameLength relative to the frequency to avoid those popping sounds caused when the frame ends on a sine's peak.
So on every iteration, I set the frameLength and then populate buffer with the signal.
AVAudioPlayerNode *audioPlayer = [[AVAudioPlayerNode alloc] init];
AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:[audioPlayer outputFormatForBus:0] frameCapacity:44100*10];

while (YES) {
    AVAudioFrameCount frameCount = ceil(44100.0 / osc.frequency);
    [buffer setFrameLength:frameCount];
    [audioPlayer scheduleBuffer:buffer atTime:0 options:AVAudioPlayerNodeBufferLoops completionHandler:nil];
    for (int i = 0; i < [buffer frameLength]; i++) {
        for (int channelNumber = 0; channelNumber < channelCount; channelNumber++) {
            float * const channelBuffer = floatChannelData[channelNumber];
            channelBuffer[i] = [self getSignalOnFrame:i];
        }
    }
}
where the signal is generated from:
- (float)getSignalOnFrame:(int)i {
    float sampleRate = 44100.0;
    return [osc amplitude] * sinf([osc frequency] * i * 2.0 * M_PI / sampleRate);
}
The starting tone sounds fine and there are no popping sounds when notes change but the notes themselves sound like they're being turned into sawtooth waves or something.
Any ideas on what I might be missing here?
Or should I just create a whole new audioPlayer with a fresh buffer for each note played?
Thanks for any advice!
If the buffers are contiguous, then a better method to not have discontinuities in sine wave generation is to remember the phase of the sinewave at the end of one buffer, and use that phase as the starting point (angle) to generate the next buffer.
If the buffers are not contiguous, then a common way to avoid clicks is to gradually taper the first and last few milliseconds of each buffer from full gain to zero. A linear gain taper will do, but a raised cosine taper is a slightly smoother taper.
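As a rough sketch of the phase-carrying approach for the contiguous case (hedged: _currentPhase is an ivar I'm introducing, and the buffer-filling helper name is mine, not AVFoundation's):

// Sketch: fill a buffer while carrying the sine phase over from the previous buffer,
// so there is no discontinuity at buffer boundaries. _currentPhase is an assumed float ivar.
- (void)fillBuffer:(AVAudioPCMBuffer *)buffer frequency:(float)frequency amplitude:(float)amplitude
{
    const float sampleRate = 44100.0f;
    const float phaseIncrement = 2.0f * M_PI * frequency / sampleRate;
    AVAudioChannelCount channelCount = buffer.format.channelCount;
    float *const *channelData = buffer.floatChannelData;

    for (AVAudioFrameCount frame = 0; frame < buffer.frameLength; frame++) {
        float sample = amplitude * sinf(_currentPhase);
        for (AVAudioChannelCount channel = 0; channel < channelCount; channel++) {
            channelData[channel][frame] = sample;
        }
        _currentPhase += phaseIncrement;
        if (_currentPhase > 2.0f * M_PI) {
            _currentPhase -= 2.0f * M_PI;   // keep the phase bounded so it never loses precision
        }
    }
}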

Objective-C block skips code and then executes it later

I'm using the GPUImage framework and I've noticed that the compiler automatically skips everything that is within the brackets of the setColorAverageProcessingFinishedBlock. It completely skips over these contents and continues on, executing everything else in the code. Once everything else has been executed, it comes back to the content within the brackets. Obviously, this has unintended side effects.
NSMutableArray *redValues = [NSMutableArray array];
NSMutableArray *arrayOne = [NSMutableArray array];
NSUInteger arrayOneLength = [arrayOne count];
__block int counter = 0;
int amount = 1;
float totalOne, diffForAverage;
NSInteger j;

GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageAverageColor *averageColor = [[GPUImageAverageColor alloc] init];
[averageColor setColorAverageProcessingFinishedBlock:^(CGFloat redComponent, CGFloat greenComponent, CGFloat blueComponent, CGFloat alphaComponent, CMTime frameTime)
{ // the compiler runs until here, then skips everything within these brackets
    NSLog(@"%f", redComponent);
    [redValues addObject:@(redComponent * 255)];
}]; // after the brackets close, it executes everything that is below this
// once everything below this has been executed, it goes back to the brackets and executes
// everything between them

[videoCamera addTarget:averageColor];
[videoCamera startCameraCapture];

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 27 * NSEC_PER_SEC), dispatch_get_main_queue(), ^{
    [videoCamera stopCameraCapture];
});

totalOne = [redValues[24] floatValue];
float average = totalOne / amount;
NSUInteger redValuesLength = [redValues count];
for (j = (counter + 24); j < (redValuesLength - 24); j++)
{
    diffForAverage = average - [redValues[j + 1] floatValue];
    if (diffForAverage > -1 && diffForAverage < 1)
    {
        totalOne += [redValues[j + 1] floatValue];
        amount++;
        [arrayOne addObject:[NSNumber numberWithInt:(j - 24)]];
        counter++;
    }
}
How can I solve this problem?
There are two issues with the above code: a memory management one, and a misunderstanding of how blocks work.
First, you're creating a GPUImageVideoCamera instance within a method, but not retaining it as an instance variable. I'm going to assume this is code using automatic reference counting, and if that's true, this camera instance will be deallocated the instant your method is finished. At best, you'll capture maybe one frame from the camera before this is deallocated. At worst, this will crash as the camera and the entire filter chain attached to it are deallocated mid-operation.
Make an instance variable on your containing class and assign your GPUImageVideoCamera instance to it to have it last long enough to be useful.
The second issue with the above is a misunderstanding about how and when blocks will execute. Blocks are merely sections of code you can pass around, and they don't necessarily execute in serial with the rest of the code around them.
In this case, the block you're providing is a callback that will be triggered after every frame of video is processed through the average color operation. This processing takes place asynchronously on a background queue, and you have to design your code to acknowledge this.
If you want X values to be built up, have each measurement be added to an array inside that block, and then within the block check for X values to be reached. At that point, average and do whatever with them. Basically, add a check within the block and move the code you have after it into the block to be run whenever the count is greater than X. You may wish to stop camera capture at that point, if that's all you need.
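A rough sketch of that restructuring (hedged: kTargetFrameCount and processRedValues: are placeholder names I'm introducing, not part of GPUImage, and videoCamera is assumed to have been promoted to a property per the first point above):

// Sketch: collect values inside the block and only process them once enough have arrived.
NSMutableArray *redValues = [NSMutableArray array];
const NSUInteger kTargetFrameCount = 300;   // placeholder: however many frames you need
__weak typeof(self) weakSelf = self;

[averageColor setColorAverageProcessingFinishedBlock:^(CGFloat redComponent, CGFloat greenComponent, CGFloat blueComponent, CGFloat alphaComponent, CMTime frameTime) {
    [redValues addObject:@(redComponent * 255)];
    if ([redValues count] == kTargetFrameCount) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [weakSelf.videoCamera stopCameraCapture];   // assumes videoCamera is now a property on self
            [weakSelf processRedValues:redValues];      // hypothetical method holding the averaging loop from the question
        });
    }
}];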
The code you post is working exactly as it is supposed to work. The color average processing takes a while so it is done on a background thread so the main thread isn't stalled. After the processing is done, then the block is called.
Any code that shouldn't be executed until after the processing is done needs to go inside the block.

SpriteKit - Logging the debug info such as fps, NodeCount

I know how to display those info in the screen, but I would like to log them in a file/console for offline investigation. How could I do that?
You can measure it yourself in the scene's -update: method, which is called once per frame.
- (void)update:(NSTimeInterval)currentTime
{
    self.frameCount++;                            // increase frame count value
    uint64_t currentTick = mach_absolute_time();  // current time in Mach ticks (needs #include <mach/mach_time.h>)
    // Convert the tick delta since the previous call into seconds; FPS is its inverse.
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    double delta = (double)(currentTick - self.lastUpdateTick) * timebase.numer / timebase.denom / NSEC_PER_SEC;
    NSLog(@"fps: %.1f", 1.0 / delta);
    self.lastUpdateTick = currentTick;            // remember for the next call
}
To get the node count, recursively count the children of all the scene's children. Something like this (not tested):
- (NSUInteger)childrenOf:(SKNode *)node
{
    NSUInteger count = 0;
    for (SKNode *child in node.children)
        count += [self childrenOf:child] + 1;
    return count;
}

- (void)calculateSceneChildrenCount
{
    NSUInteger count = [self childrenOf:self];
    NSLog(@"count is %lu", (unsigned long)count);
}

How can I get the current sound level of the current audio output device?

I'm looking for a way to tap into the current audio output on a Mac, then return a value representing the current sound level.
By sound level, I mean the amount of noise being generated by the output. I'm NOT asking how to get the current volume level of the output device.
The following code is pulled from Apple's AVRecorder sample. This particular bit acquires the set of connections from this class's movieFileOutput, gets the AVCaptureAudioChannel for each connection, and calculates decibel power from that. I would presume that if you are looking for an output "noise level", you would be able to capture similar information. If you are looking for something lower level than this, try the HAL (Hardware Abstraction Layer) framework.
- (void)updateAudioLevels:(NSTimer *)timer
{
    NSInteger channelCount = 0;
    float decibels = 0.f;

    // Sum all of the average power levels and divide by the number of channels
    for (AVCaptureConnection *connection in [[self movieFileOutput] connections]) {
        for (AVCaptureAudioChannel *audioChannel in [connection audioChannels]) {
            decibels += [audioChannel averagePowerLevel];
            channelCount += 1;
        }
    }

    decibels /= channelCount;

    [[self audioLevelMeter] setFloatValue:(pow(10.f, 0.05f * decibels) * 20.0f)];
}

Get the precise time of system bootup on iOS/OS X

Is there an API to obtain the NSDate or NSTimeInterval representing the time the system booted? Some APIs such as [NSProcessInfo systemUptime] and Core Motion return time since boot. I need to precisely correlate these uptime values with NSDates, to about a millisecond.
Time since boot ostensibly provides more precision, but it's easy to see that NSDate already provides precision on the order of 100 nanoseconds, and anything under a microsecond is just measuring interrupt latency and PCB clock jitter.
The obvious thing is to subtract the uptime from the current time [NSDate date]. But that assumes that time does not change between the two system calls, which is, well, hard to accomplish. Moreover if the thread is preempted between the calls, everything is thrown off. The workaround is to repeat the process several times and use the smallest result, but yuck.
NSDate must have a master offset it uses to generate objects with the current time from the system uptime, is there really no way to obtain it?
In OSX you could use sysctl(). This is how the OSX Unix utility uptime does it. Source code is available - search for boottime.
Fair warning though: on iOS I have no idea if this would work.
UPDATE: found some code :)
#include <sys/types.h>
#include <sys/sysctl.h>

#define MIB_SIZE 2

int mib[MIB_SIZE];
size_t size;
struct timeval boottime;

mib[0] = CTL_KERN;
mib[1] = KERN_BOOTTIME;
size = sizeof(boottime);

if (sysctl(mib, MIB_SIZE, &boottime, &size, NULL, 0) != -1)
{
    // successful call
    NSDate *bootDate = [NSDate dateWithTimeIntervalSince1970:
                        boottime.tv_sec + boottime.tv_usec / 1.e6];
}
see if this works...
The accepted answer, using sysctl, works, but the values returned by sysctl for KERN_BOOTTIME, at least in my testing (Darwin Kernel Version 11.4.2), are always in whole seconds (the microseconds field, tv_usec, is 0). This means the resulting time may be up to 1 second off, which is not very accurate.
Also, having compared that value to one derived experimentally from the difference between the REALTIME_CLOCK and CALENDAR_CLOCK, they sometimes differ by a couple of seconds, so it's not clear whether the KERN_BOOTTIME value corresponds exactly to the time basis for the uptime clocks.
There is another way, which can give a result slightly different (less or more) from the accepted answer.
I have compared them: I get a difference of -7 seconds on OS X 10.9.3 and +2 seconds on iOS 7.1.1.
As I understand it, this approach gives the same result if the wall clock changes, whereas the accepted answer gives different results if the wall clock changes.
Here is the code:
#define COUNT_ARRAY_ELEMS(arr) (sizeof(arr) / sizeof(arr[0]))

static CFAbsoluteTime getKernelTaskStartTime(void) {
    enum { MICROSECONDS_IN_SEC = 1000 * 1000 };
    struct kinfo_proc info;
    bzero(&info, sizeof(info));

    // Initialize mib, which tells sysctl the info we want; in this case
    // we're looking for information about a specific process ID = 0 (the kernel task).
    int mib[] = {CTL_KERN, KERN_PROC, KERN_PROC_PID, 0};

    // Call sysctl.
    size_t size = sizeof(info);
    const int sysctlResult = sysctl(mib, COUNT_ARRAY_ELEMS(mib), &info, &size, NULL, 0);
    assert(sysctlResult != -1); // sysctl returns -1 on failure

    const struct timeval *timeVal = &(info.kp_proc.p_starttime);
    NSTimeInterval result = -kCFAbsoluteTimeIntervalSince1970;
    result += timeVal->tv_sec;
    result += timeVal->tv_usec / (double)MICROSECONDS_IN_SEC;
    return result;
}
Refer to this category
NSDate+BootTime.h

#import <Foundation/Foundation.h>

@interface NSDate (BootTime)
+ (NSDate *)bootTime;
+ (NSTimeInterval)bootTimeTimeIntervalSinceReferenceDate;
@end

NSDate+BootTime.m

#import "NSDate+BootTime.h"

#include <sys/types.h>
#include <sys/sysctl.h>

// Forward declaration so the class methods below can call it.
static CFAbsoluteTime getKernelTaskStartTime(void);

@implementation NSDate (BootTime)

+ (NSDate *)bootTime {
    return [NSDate dateWithTimeIntervalSinceReferenceDate:[NSDate bootTimeTimeIntervalSinceReferenceDate]];
}

+ (NSTimeInterval)bootTimeTimeIntervalSinceReferenceDate {
    return getKernelTaskStartTime();
}

////////////////////////////////////////////////////////////////////////
#pragma mark - Private
////////////////////////////////////////////////////////////////////////

#define COUNT_ARRAY_ELEMS(arr) sizeof(arr)/sizeof(arr[0])

static CFAbsoluteTime getKernelTaskStartTime(void) {
    enum { MICROSECONDS_IN_SEC = 1000 * 1000 };
    struct kinfo_proc info;
    bzero(&info, sizeof(info));

    // Initialize mib, which tells sysctl the info we want, in this case
    // we're looking for information about a specific process ID = 0.
    int mib[] = {CTL_KERN, KERN_PROC, KERN_PROC_PID, 0};

    // Call sysctl.
    size_t size = sizeof(info);
    const int sysctlResult = sysctl(mib, COUNT_ARRAY_ELEMS(mib), &info, &size, NULL, 0);

    if (sysctlResult != -1) {
        const struct timeval *timeVal = &(info.kp_proc.p_starttime);
        NSTimeInterval result = -kCFAbsoluteTimeIntervalSince1970;
        result += timeVal->tv_sec;
        result += timeVal->tv_usec / (double)MICROSECONDS_IN_SEC;
        return result;
    }

    return 0;
}

@end
The routines inside mach/mach_time.h are guaranteed to be monotonically increasing, unlike NSDate.
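For example, a minimal sketch (my own, using only the standard mach_time calls) of reading that monotonic clock as seconds since boot; note that mach_absolute_time() does not advance while the machine is asleep:

#include <mach/mach_time.h>

// Sketch: convert mach_absolute_time() ticks into seconds since boot.
static double monotonicSecondsSinceBoot(void) {
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);                 // ratio of Mach ticks to nanoseconds
    uint64_t ticks = mach_absolute_time();         // monotonic ticks since boot (excluding sleep)
    return (double)ticks * timebase.numer / timebase.denom / 1e9;
}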