Generating Morse code style tones in Objective-C

I have a class that plays a tone using Audio Units. What I would like is for the class to play Morse-code-style tones when I send it a phrase or letter.
How would I go about this? I'm hoping someone can point me in the right direction. I have included the tone generator .h and .m files below.
//
// Singer.h
// musiculesdev
//
// Created by Dylan on 2/20/09.
// Copyright 2009 __MyCompanyName__. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <AudioUnit/AudioUnit.h>

@interface Singer : NSObject {
    AudioComponentInstance audioUnit;
}

- (void)initAudio; // put this in init?
- (void)start;
- (void)stop;
- (IBAction)turnOnSound:(id)sender;

@end
//
// Singer.m
// musiculesdev
//
// Created by Dylan on 2/20/09.
// Copyright 2009 __MyCompanyName__. All rights reserved.
//
#import <AudioUnit/AudioUnit.h>
#import <math.h>
#import "Singer.h"
#define kOutputBus 0
#define kSampleRate 44100
//44100.0f
#define kWaveform (M_PI * 2.0f / kSampleRate)
@implementation Singer

OSStatus playbackCallback(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData) {
    //Singer *me = (Singer *)inRefCon;
    static int phase = 0;
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        int samples = ioData->mBuffers[i].mDataByteSize / sizeof(SInt16);
        SInt16 values[samples];
        float waves;
        for (int j = 0; j < samples; j++) {
            waves = 0;
            waves += sin(kWaveform * 261.63f * phase);
            waves += sin(kWaveform * 120.0f * phase);
            waves += sin(kWaveform * 1760.3f * phase);
            waves += sin(kWaveform * 880.0f * phase);
            waves *= 32500 / 4; // <--------- make sure to divide by how many waves you're stacking
            values[j] = (SInt16)waves;
            values[j] += values[j] << 16;
            phase++;
        }
        memcpy(ioData->mBuffers[i].mData, values, samples * sizeof(SInt16));
    }
    return noErr;
}

- (IBAction)turnOnSound:(id)sender {
    Singer *singer = [[Singer alloc] init];
    [singer start];
}

- (id)init {
    NSLog(@"In the singer init!!");
    if (self = [super init]) {
        [self initAudio];
    }
    return self;
}

- (void)initAudio {
    OSStatus status;
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &desc);
    status = AudioComponentInstanceNew(outputComponent, &audioUnit);
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, kOutputBus, &flag, sizeof(flag));
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = kSampleRate;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;
    status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, kOutputBus, &audioFormat, sizeof(audioFormat));
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &callbackStruct, sizeof(callbackStruct));
    status = AudioUnitInitialize(audioUnit);
}

- (void)start {
    OSStatus status;
    status = AudioOutputUnitStart(audioUnit);
}

- (void)stop {
    OSStatus status;
    status = AudioOutputUnitStop(audioUnit);
}

- (void)dealloc {
    AudioUnitUninitialize(audioUnit);
    [super dealloc];
}

@end

You need to be able to generate tones of a specific duration, separated by silences of a specific duration. As long as you have those two building blocks you can send Morse code:
dot = 1 unit
dash = 3 units
space between dots/dashes within a letter = 1 unit
space between letters = 3 units
space between words = 7 units
The length of a unit determines the overall speed of the Morse code. Start with e.g. 50 ms.
The tone should just be a pure sine wave at an appropriate frequency, e.g. 400 Hz. The silence can just be an alternate buffer containing all zeroes. That way you can "play" both the tone and the silence through the same API, without worrying about timing/synchronisation, etc.
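As a rough illustration (my sketch, not part of the answer's code), the sequencing could sit on top of the Singer class from the question. The lookup table is abridged and the playMorse: helper is hypothetical; it drives [self start]/[self stop] from a background queue with usleep, which is crude but shows the unit timing. A real implementation would keep the unit running and switch the render callback between the sine buffer and a zero-filled silence buffer, as suggested above:

#include <unistd.h>

// Abridged Morse lookup; extend to the full alphabet/digits as needed.
static NSString *morseForChar(unichar c) {
    switch (c) {
        case 'a': return @".-";
        case 'e': return @".";
        case 'o': return @"---";
        case 's': return @"...";
        default:  return nil; // unknown characters are skipped
    }
}

// Hypothetical addition to the Singer class above.
- (void)playMorse:(NSString *)phrase {
    static const useconds_t unit = 50000; // 50 ms per unit
    NSString *p = [phrase lowercaseString];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        for (NSUInteger i = 0; i < p.length; i++) {
            unichar c = [p characterAtIndex:i];
            if (c == ' ') { usleep(4 * unit); continue; } // pads the 3-unit letter gap out to 7
            NSString *code = morseForChar(c);
            if (!code) continue;
            for (NSUInteger j = 0; j < code.length; j++) {
                [self start];                                              // tone on
                usleep(([code characterAtIndex:j] == '-' ? 3 : 1) * unit); // dash = 3, dot = 1
                [self stop];                                               // tone off
                usleep(1 * unit);                                          // gap inside a letter
            }
            usleep(2 * unit); // 1 unit already elapsed, so the letter gap totals 3 units
        }
    });
}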

Related

How to play pcm audio buffer from a socket server using audio unit circular buffer

I hope someone can help me. I am new to Objective-C and OS X, and I am trying to play audio data I am receiving via a socket into my audio queue. I found this link, https://stackoverflow.com/a/30318859/4274654, which in a way addresses my issue with a circular buffer.
However, when I try to run my project it returns an OSStatus error of -10865, which is why the code logs "Error enabling AudioUnit output bus" at this line:
status = AudioUnitSetProperty(_audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, kOutputBus, &one, sizeof(one));
Here is my code:
Test.h
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import "TPCircularBuffer.h"
@interface Test : Communicator

@property (nonatomic) AudioComponentInstance audioUnit;
@property (nonatomic) TPCircularBuffer circularBuffer;

- (TPCircularBuffer *)outputShouldUseCircularBuffer;
- (void)start;

@end
Test.m
#import "Test.h"
#define kOutputBus 0
#define kInputBus 1
@implementation Test {
    BOOL stopped;
}

static OSStatus OutputRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
    Test *output = (__bridge Test *)inRefCon;
    TPCircularBuffer *circularBuffer = [output outputShouldUseCircularBuffer];
    if (!circularBuffer) {
        SInt32 *left = (SInt32 *)ioData->mBuffers[0].mData;
        for (int i = 0; i < inNumberFrames; i++) {
            left[i] = 0;
        }
        return noErr;
    }
    int32_t bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *outputBuffer = ioData->mBuffers[0].mData;
    uint32_t availableBytes;
    SInt16 *sourceBuffer = TPCircularBufferTail(circularBuffer, &availableBytes);
    int32_t amount = MIN(bytesToCopy, availableBytes);
    memcpy(outputBuffer, sourceBuffer, amount);
    TPCircularBufferConsume(circularBuffer, amount);
    return noErr;
}
- (void)start
{
    [self circularBuffer:&_circularBuffer withSize:24576 * 5];
    stopped = NO;
    [self setupAudioUnit];
    // [super setup:@"http://localhost" port:5321];
}

- (void)setupAudioUnit
{
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    OSStatus status;
    status = AudioComponentInstanceNew(comp, &_audioUnit);
    if (status != noErr)
    {
        NSLog(@"Error creating AudioUnit instance");
    }
    // Enable input and output on AURemoteIO
    // Input is enabled on the input scope of the input element
    // Output is enabled on the output scope of the output element
    UInt32 one = 1;
    status = AudioUnitSetProperty(_audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, kOutputBus, &one, sizeof(one));
    if (status != noErr)
    {
        NSLog(@"Error enabling AudioUnit output bus");
    }
    // Explicitly set the input and output client formats
    // sample rate = 44100, num channels = 1, format = 16-bit signed integer
    AudioStreamBasicDescription audioFormat = [self getAudioDescription];
    status = AudioUnitSetProperty(_audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, kOutputBus, &audioFormat, sizeof(audioFormat));
    if (status != noErr)
    {
        NSLog(@"Error setting audio format");
    }
    AURenderCallbackStruct renderCallback;
    renderCallback.inputProc = OutputRenderCallback;
    renderCallback.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(_audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &renderCallback, sizeof(renderCallback));
    if (status != noErr)
    {
        NSLog(@"Error setting rendering callback");
    }
    // Initialize the AURemoteIO instance
    status = AudioUnitInitialize(_audioUnit);
    if (status != noErr)
    {
        NSLog(@"Error initializing audio unit");
    }
}
- (AudioStreamBasicDescription)getAudioDescription {
    AudioStreamBasicDescription audioDescription = {0};
    audioDescription.mFormatID = kAudioFormatLinearPCM;
    audioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
    audioDescription.mChannelsPerFrame = 1;
    audioDescription.mBytesPerPacket = sizeof(SInt16) * audioDescription.mChannelsPerFrame;
    audioDescription.mFramesPerPacket = 1;
    audioDescription.mBytesPerFrame = sizeof(SInt16) * audioDescription.mChannelsPerFrame;
    audioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
    audioDescription.mSampleRate = 44100.0;
    return audioDescription;
}

- (void)circularBuffer:(TPCircularBuffer *)circularBuffer withSize:(int)size {
    TPCircularBufferInit(circularBuffer, size);
}

- (void)appendDataToCircularBuffer:(TPCircularBuffer *)circularBuffer
               fromAudioBufferList:(AudioBufferList *)audioBufferList {
    TPCircularBufferProduceBytes(circularBuffer,
                                 audioBufferList->mBuffers[0].mData,
                                 audioBufferList->mBuffers[0].mDataByteSize);
}

- (void)freeCircularBuffer:(TPCircularBuffer *)circularBuffer {
    TPCircularBufferClear(circularBuffer);
    TPCircularBufferCleanup(circularBuffer);
}

- (TPCircularBuffer *)outputShouldUseCircularBuffer
{
    return &_circularBuffer;
}
- (void)stop
{
    OSStatus status = AudioOutputUnitStop(_audioUnit);
    if (status != noErr)
    {
        NSLog(@"Error stopping audio unit");
    }
    TPCircularBufferClear(&_circularBuffer);
    _audioUnit = nil;
    stopped = YES;
}

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)event {
    switch (event) {
        case NSStreamEventOpenCompleted:
            NSLog(@"Stream opened");
            break;
        case NSStreamEventHasBytesAvailable:
            if (stream == [super inputStream]) {
                NSLog(@"NSStreamEventHasBytesAvailable");
                uint8_t buffer[1024];
                NSUInteger len;
                while ([[super inputStream] hasBytesAvailable]) {
                    len = [[super inputStream] read:buffer maxLength:sizeof(buffer)];
                    if (len > 0) {
                        // converting buffer to byte data
                        NSString *output = [[NSString alloc] initWithBytes:buffer length:len encoding:NSASCIIStringEncoding];
                        if (nil != output) {
                            //NSLog(@"server said: %@", output);
                        }
                        NSData *data0 = [[NSData alloc] initWithBytes:buffer length:len];
                        if (nil != data0) {
                            SInt16 *byteData = (SInt16 *)malloc(len);
                            memcpy(byteData, [data0 bytes], len);
                            double sum = 0.0;
                            for (int i = 0; i < len / 2; i++) {
                                sum += byteData[i] * byteData[i];
                            }
                            Byte *soundData = (Byte *)malloc(len);
                            memcpy(soundData, [data0 bytes], len);
                            if (soundData)
                            {
                                AudioBufferList *theDataBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList) * 1);
                                theDataBuffer->mNumberBuffers = 1;
                                theDataBuffer->mBuffers[0].mDataByteSize = (UInt32)len;
                                theDataBuffer->mBuffers[0].mNumberChannels = 1;
                                theDataBuffer->mBuffers[0].mData = (SInt16 *)soundData;
                                NSLog(@"soundData here");
                                [self appendDataToCircularBuffer:&_circularBuffer fromAudioBufferList:theDataBuffer];
                            }
                        }
                    }
                }
            }
            break;
        case NSStreamEventErrorOccurred:
            NSLog(@"Can't connect to server");
            break;
        case NSStreamEventEndEncountered:
            [stream close];
            [stream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
            break;
        default:
            NSLog(@"Unknown event");
    }
    [super stream:stream handleEvent:event];
}

@end
I would highly appreciate it if anyone has an example of playing buffers returned from a socket server into an audio queue, so that I can listen to the sound as it comes from the socket server.
Thanks
Your code seems to be asking for a kAudioUnitSubType_VoiceProcessingIO audio unit, but kAudioUnitSubType_RemoteIO would be a more suitable iOS audio unit for just playing buffers of audio samples.
Also, your code does not seem to select an appropriate audio session category and activate the session before playing audio. See Apple's documentation on doing this: https://developer.apple.com/library/content/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Introduction/Introduction.html
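As a rough illustration of that second point, the session setup might look something like this (a minimal sketch using AVAudioSession; error handling abbreviated, and the placement before audio-unit startup is the important part):

#import <AVFoundation/AVFoundation.h>

// Configure and activate the audio session before touching the audio unit.
// AVAudioSessionCategoryPlayback suits pure playback of received audio.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
if (![session setCategory:AVAudioSessionCategoryPlayback error:&error]) {
    NSLog(@"Error setting audio session category: %@", error);
}
if (![session setActive:YES error:&error]) {
    NSLog(@"Error activating audio session: %@", error);
}
// Only then create, initialize, and start the output unit.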

Sprite Collision With Three Different Sprites?

I've created three sprites that work as walls in my scene. Then I have a sprite and a score. I want the sprite to reset the score to 0 only when it touches the floor (one of the three sprites). This is what I have for the contact:
- (void)didBeginContact:(SKPhysicsContact *)contact
{
    SKPhysicsBody *firstBody, *secondBody;
    if (contact.bodyA.categoryBitMask < contact.bodyB.categoryBitMask)
    {
        firstBody = contact.bodyA;
        secondBody = contact.bodyB;
    }
    else
    {
        firstBody = contact.bodyB;
        secondBody = contact.bodyA;
    }
    if ((firstBody.categoryBitMask & shipCategory) != 0 &&
        (secondBody.categoryBitMask & obstacleCategory) != 0)
    {
        score = 0;
        myLabel.text = [NSString stringWithFormat:@"%i", score];
    }
}
Here are the categoryBitMask constants:
static const uint32_t shipCategory = 0x1 << 1;
static const uint32_t obstacleCategory = 0x1 << 1;
static const uint32_t wallCategory = 0x1 << 1;
These are the methods for the sprite, the floor, and the walls:
- (SKSpriteNode *)floorNode
{
    floorNode = [SKSpriteNode spriteNodeWithImageNamed:@"rectangle.png"];
    floorNode.position = CGPointMake(160, 100);
    floorNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:floorNode.size];
    floorNode.physicsBody.categoryBitMask = obstacleCategory;
    floorNode.physicsBody.contactTestBitMask = shipCategory;
    fireNode.physicsBody.usesPreciseCollisionDetection = YES;
    fireNode.physicsBody.collisionBitMask = 2;
    floorNode.physicsBody.dynamic = NO;
    floorNode.zPosition = 1.0;
    return floorNode;
}

- (SKSpriteNode *)walldxNode
{
    walldxNode = [SKSpriteNode spriteNodeWithImageNamed:@"wall.png"];
    walldxNode.position = CGPointMake(30, 568);
    walldxNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:walldxNode.size];
    walldxNode.physicsBody.categoryBitMask = wallCategory;
    walldxNode.physicsBody.dynamic = NO;
    return walldxNode;
}

- (SKSpriteNode *)wallsxNode
{
    wallsxNode = [SKSpriteNode spriteNodeWithImageNamed:@"wall.png"];
    wallsxNode.position = CGPointMake(290, 568);
    wallsxNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:wallsxNode.size];
    wallsxNode.physicsBody.categoryBitMask = wallCategory;
    wallsxNode.physicsBody.dynamic = NO;
    return wallsxNode;
}

- (SKSpriteNode *)fireButtonNode
{
    fireNode = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship.png"];
    fireNode.position = CGPointMake(160, 450);
    fireNode.xScale = 0.32;
    fireNode.yScale = 0.32;
    fireNode.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:fireNode.size.height / 2];
    fireNode.physicsBody.categoryBitMask = shipCategory;
    fireNode.physicsBody.dynamic = YES;
    fireNode.physicsBody.contactTestBitMask = obstacleCategory;
    fireNode.physicsBody.collisionBitMask = 2;
    fireNode.physicsBody.usesPreciseCollisionDetection = YES;
    fireNode.name = @"fireButtonNode"; // how the node is identified later
    fireNode.zPosition = 2.0;
    return fireNode;
}
The problem is that the sprite also resets the score to 0 when it collides with the other two walls, which have different categoryBitMask values. I don't know what to do.
Your category bit masks all have the same value. Make them differ, like this:
static const uint32_t shipCategory = 0x1 << 1; // this equals 2
static const uint32_t obstacleCategory = 0x1 << 2; // this equals 4
static const uint32_t wallCategory = 0x1 << 3; // this equals 8
Batalia is correct, but these days you should probably use an enum rather than some crusty old static const. I did it in some code like this and then filed a bug against the sample code for not using Apple's own current best practices:
// These constants are used to define the physics interactions between physics bodies in the scene.
typedef NS_OPTIONS(NSUInteger, RockBusterCollionsMask) {
    RBCmissileCategory = 1 << 0,
    RBCasteroidCategory = 1 << 1,
    RBCshipCategory = 1 << 2
};
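To tie the two answers back to the question (my sketch, not from either answer): once the three categories occupy distinct bits, the original didBeginContact: check only fires for the ship/floor pair, because a wall's bit no longer overlaps obstacleCategory. score and myLabel are the question's own variables:

static const uint32_t shipCategory     = 0x1 << 1; // 2
static const uint32_t obstacleCategory = 0x1 << 2; // 4 (the floor)
static const uint32_t wallCategory     = 0x1 << 3; // 8

- (void)didBeginContact:(SKPhysicsContact *)contact
{
    uint32_t pair = contact.bodyA.categoryBitMask | contact.bodyB.categoryBitMask;
    if (pair == (shipCategory | obstacleCategory)) {
        // Ship touched the floor; a wall contact no longer matches because
        // (wallCategory & obstacleCategory) == 0.
        score = 0;
        myLabel.text = [NSString stringWithFormat:@"%i", score];
    }
}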

Segmentation Fault 11 | CGEventTap application stops processing mouse events after an arbitrary amount of time

The purpose of this application is to run in the background 24/7 and lock the mouse in the center of the screen. It's for work with a series of Flash programs, to simulate joystick-style movement for the mouse. I've already attempted to use other methods built into Cocoa/Quartz to accomplish this, and none of them worked for my purpose, so this is the way I have to do it.
I have been trying to figure out why, after a seemingly random amount of time, this program simply stops restricting the mouse. The program doesn't give an error or anything like that; it just stops working. The force-quit screen DOES say "Not Responding"; however, many of my mouse-modifying scripts, including this one, always read as "Not Responding" and keep functioning.
Here's the code:
code removed, check below for updated code.
Final Update
Ken Thomases gave me the right answer, I've updated my code based on his suggestions.
Here's the final code that I've gotten to work flawlessly (this ran for 12+ hours without a hitch before I manually stopped it):
#import <Cocoa/Cocoa.h>
#import <CoreMedia/CoreMedia.h>
int screen_width, screen_height;

struct event_tap_data_struct {
    CFMachPortRef event_tap;
    float speed_modifier;
};

CGEventRef
mouse_filter(CGEventTapProxy proxy, CGEventType type, CGEventRef event, struct event_tap_data_struct *);

int
screen_res(int);

int
main(int argc, char *argv[]) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    screen_width = screen_res(0);
    screen_height = screen_res(1);
    CFRunLoopSourceRef runLoopSource;
    CGEventMask event_mask = kCGEventMaskForAllEvents;
    CGSetLocalEventsSuppressionInterval(0);
    CFMachPortRef eventTap;
    struct event_tap_data_struct event_tap_data = {eventTap, 0.2};
    eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap, 0, event_mask, mouse_filter, &event_tap_data);
    event_tap_data.event_tap = eventTap;
    if (!eventTap) {
        NSLog(@"Couldn't create event tap!");
        exit(1);
    }
    runLoopSource = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, event_tap_data.event_tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, kCFRunLoopCommonModes);
    CGEventTapEnable(event_tap_data.event_tap, true);
    CFRunLoopRun();
    CFRelease(eventTap);
    CFRelease(runLoopSource);
    [pool release];
    exit(0);
}
int
screen_res(int width_or_height) {
    NSRect screenRect;
    NSArray *screenArray = [NSScreen screens];
    unsigned screenCount = (unsigned)[screenArray count];
    for (unsigned index = 0; index < screenCount; index++)
    {
        NSScreen *screen = [screenArray objectAtIndex:index];
        screenRect = [screen visibleFrame];
    }
    int resolution_array[] = {(int)CGDisplayPixelsWide(CGMainDisplayID()), (int)CGDisplayPixelsHigh(CGMainDisplayID())};
    if (width_or_height == 0) {
        return resolution_array[0];
    } else {
        return resolution_array[1];
    }
}
CGEventRef
mouse_filter(CGEventTapProxy proxy, CGEventType type, CGEventRef event, struct event_tap_data_struct *event_tap_data) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    if (type == kCGEventTapDisabledByTimeout || type == kCGEventTapDisabledByUserInput) {
        CGEventTapEnable(event_tap_data->event_tap, true);
        return event;
    } else if (type == kCGEventMouseMoved || type == kCGEventLeftMouseDragged || type == kCGEventRightMouseDragged || type == kCGEventOtherMouseDragged) {
        CGPoint point = CGEventGetLocation(event);
        NSPoint old_point;
        CGPoint target;
        int tX = point.x;
        int tY = point.y;
        float oX = screen_width / 2;
        float oY = screen_height / 2;
        float dX = tX - oX;
        float dY = tY - oY;
        old_point.x = floor(oX); old_point.y = floor(oY);
        dX *= 2, dY *= 2;
        tX = round(oX + dX);
        tY = round(oY + dY);
        target = CGPointMake(tX, tY);
        CGWarpMouseCursorPosition(old_point);
        CGEventSetLocation(event, target);
    }
    [pool release];
    return event;
}
(first) Update:
The program is still crashing, but I have now run it as an executable and received an error code.
When it terminates, the console logs "Segmentation fault: 11".
I've been trying to discover what this means; however, it appears to be an impressively broad term, and I've yet to home in on something useful.
Here is the new code I am using:
#import <Cocoa/Cocoa.h>
#import <CoreMedia/CoreMedia.h>
int screen_width, screen_height;
struct event_tap_data_struct {
    CFMachPortRef event_tap;
    float speed_modifier;
};

CGEventRef
mouse_filter(CGEventTapProxy proxy, CGEventType type, CGEventRef event, struct event_tap_data_struct *);

int
screen_res(int);

int
main(int argc, char *argv[]) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    screen_width = screen_res(0);
    screen_height = screen_res(1);
    CFRunLoopSourceRef runLoopSource;
    CGEventMask event_mask;
    event_mask = CGEventMaskBit(kCGEventMouseMoved) | CGEventMaskBit(kCGEventLeftMouseDragged) | CGEventMaskBit(kCGEventRightMouseDragged) | CGEventMaskBit(kCGEventOtherMouseDragged);
    CGSetLocalEventsSuppressionInterval(0);
    CFMachPortRef eventTap;
    CFMachPortRef *eventTapPtr = &eventTap;
    struct event_tap_data_struct event_tap_data = {*eventTapPtr, 0.2};
    eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap, 0, event_mask, mouse_filter, &event_tap_data);
    if (!eventTap) {
        NSLog(@"Couldn't create event tap!");
        exit(1);
    }
    runLoopSource = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, eventTap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, kCFRunLoopCommonModes);
    CGEventTapEnable(eventTap, true);
    CFRunLoopRun();
    CFRelease(eventTap);
    CFRelease(runLoopSource);
    [pool release];
    exit(0);
}
int
screen_res(int width_or_height) {
    NSRect screenRect;
    NSArray *screenArray = [NSScreen screens];
    unsigned screenCount = (unsigned)[screenArray count];
    for (unsigned index = 0; index < screenCount; index++)
    {
        NSScreen *screen = [screenArray objectAtIndex:index];
        screenRect = [screen visibleFrame];
    }
    int resolution_array[] = {(int)CGDisplayPixelsWide(CGMainDisplayID()), (int)CGDisplayPixelsHigh(CGMainDisplayID())};
    if (width_or_height == 0) {
        return resolution_array[0];
    } else {
        return resolution_array[1];
    }
}
CGEventRef
mouse_filter(CGEventTapProxy proxy, CGEventType type, CGEventRef event, struct event_tap_data_struct *event_tap_data) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    if (type == kCGEventTapDisabledByTimeout || type == kCGEventTapDisabledByUserInput) {
        CGEventTapEnable(event_tap_data->event_tap, true);
    }
    CGPoint point = CGEventGetLocation(event);
    NSPoint old_point;
    CGPoint target;
    int tX = point.x;
    int tY = point.y;
    float oX = screen_width / 2;
    float oY = screen_height / 2;
    float dX = tX - oX;
    float dY = tY - oY;
    old_point.x = floor(oX); old_point.y = floor(oY);
    dX *= 2, dY *= 2;
    tX = round(oX + dX);
    tY = round(oY + dY);
    target = CGPointMake(tX, tY);
    CGWarpMouseCursorPosition(old_point);
    CGEventSetLocation(event, target);
    [pool release];
    return event;
}
You need to re-enable your event tap when it receives kCGEventTapDisabledByTimeout or kCGEventTapDisabledByUserInput.
Update: here are your lines and how they fail to work (a corrected ordering is sketched just below):
CFMachPortRef eventTap; // uninitialized value
CFMachPortRef *eventTapPtr = &eventTap; // pointer to eventTap
struct event_tap_data_struct event_tap_data = {*eventTapPtr, 0.2}; // dereferences the pointer, copying the uninitialized value into the struct
eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap, 0, event_mask, mouse_filter, &event_tap_data); // sets eventTap, but has no effect on event_tap_data
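Put differently, here is the corrected ordering as a minimal sketch (this is what the questioner's final code above does): create the tap first, then store the live CFMachPortRef into the struct the callback reads:

CFMachPortRef eventTap;
struct event_tap_data_struct event_tap_data = {NULL, 0.2}; // no valid tap yet
eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap, 0,
                            event_mask, mouse_filter, &event_tap_data);
event_tap_data.event_tap = eventTap; // now the callback can re-enable a real tap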

AudioUnit tone generator is giving me a chirp at the end of each tone generated

I'm creating an old-school music emulator for the old GW-BASIC PLAY command. To that end I have a tone generator and a music player. Between each of the notes played I'm getting a chirp sound that's mucking things up. Below are both of my classes:
ToneGen.h
#import <Foundation/Foundation.h>

@interface ToneGen : NSObject

@property (nonatomic) id delegate;
@property (nonatomic) double frequency;
@property (nonatomic) double sampleRate;
@property (nonatomic) double theta;

- (void)play:(float)ms;
- (void)play;
- (void)stop;

@end
ToneGen.m
#import <AudioUnit/AudioUnit.h>
#import "ToneGen.h"

OSStatus RenderTone(void *inRefCon,
                    AudioUnitRenderActionFlags *ioActionFlags,
                    const AudioTimeStamp *inTimeStamp,
                    UInt32 inBusNumber,
                    UInt32 inNumberFrames,
                    AudioBufferList *ioData);

void ToneInterruptionListener(void *inClientData, UInt32 inInterruptionState);

@interface ToneGen ()

@property (nonatomic) AudioComponentInstance toneUnit;
@property (nonatomic) NSTimer *timer;

- (void)createToneUnit;

@end
@implementation ToneGen

@synthesize toneUnit = _toneUnit;
@synthesize timer = _timer;
@synthesize delegate = _delegate;
@synthesize frequency = _frequency;
@synthesize sampleRate = _sampleRate;
@synthesize theta = _theta;

- (id)init
{
    self = [super init];
    if (self)
    {
        self.sampleRate = 44100;
        self.frequency = 1440.0f;
        return self;
    }
    return nil;
}

- (void)play:(float)ms
{
    [self play];
    self.timer = [NSTimer scheduledTimerWithTimeInterval:(ms / 100)
                                                  target:self
                                                selector:@selector(stop)
                                                userInfo:nil
                                                 repeats:NO];
    [[NSRunLoop mainRunLoop] addTimer:self.timer forMode:NSRunLoopCommonModes];
}

- (void)play
{
    if (!self.toneUnit)
    {
        [self createToneUnit];
        // Stop changing parameters on the unit
        OSErr err = AudioUnitInitialize(self.toneUnit);
        if (err)
            DLog(@"Error initializing unit");
        // Start playback
        err = AudioOutputUnitStart(self.toneUnit);
        if (err)
            DLog(@"Error starting unit");
    }
}

- (void)stop
{
    [self.timer invalidate];
    self.timer = nil;
    if (self.toneUnit)
    {
        AudioOutputUnitStop(self.toneUnit);
        AudioUnitUninitialize(self.toneUnit);
        AudioComponentInstanceDispose(self.toneUnit);
        self.toneUnit = nil;
    }
    if (self.delegate && [self.delegate respondsToSelector:@selector(toneStop)]) {
        [self.delegate performSelector:@selector(toneStop)];
    }
}

- (void)createToneUnit
{
    AudioComponentDescription defaultOutputDescription;
    defaultOutputDescription.componentType = kAudioUnitType_Output;
    defaultOutputDescription.componentSubType = kAudioUnitSubType_DefaultOutput;
    defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    defaultOutputDescription.componentFlags = 0;
    defaultOutputDescription.componentFlagsMask = 0;
    // Get the default playback output unit
    AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
    if (!defaultOutput)
        DLog(@"Can't find default output");
    // Create a new unit based on this that we'll use for output
    OSErr err = AudioComponentInstanceNew(defaultOutput, &_toneUnit);
    if (err)
        DLog(@"Error creating unit");
    // Set our tone rendering function on the unit
    AURenderCallbackStruct input;
    input.inputProc = RenderTone;
    input.inputProcRefCon = (__bridge void *)self;
    err = AudioUnitSetProperty(self.toneUnit,
                               kAudioUnitProperty_SetRenderCallback,
                               kAudioUnitScope_Input,
                               0,
                               &input,
                               sizeof(input));
    if (err)
        DLog(@"Error setting callback");
    // Set the format to 32 bit, single channel, floating point, linear PCM
    const int four_bytes_per_float = 4;
    const int eight_bits_per_byte = 8;
    AudioStreamBasicDescription streamFormat;
    streamFormat.mSampleRate = self.sampleRate;
    streamFormat.mFormatID = kAudioFormatLinearPCM;
    streamFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
    streamFormat.mBytesPerPacket = four_bytes_per_float;
    streamFormat.mFramesPerPacket = 1;
    streamFormat.mBytesPerFrame = four_bytes_per_float;
    streamFormat.mChannelsPerFrame = 1;
    streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
    err = AudioUnitSetProperty(self.toneUnit,
                               kAudioUnitProperty_StreamFormat,
                               kAudioUnitScope_Input,
                               0,
                               &streamFormat,
                               sizeof(AudioStreamBasicDescription));
    if (err)
        DLog(@"Error setting stream format");
}

@end
OSStatus RenderTone(void *inRefCon,
                    AudioUnitRenderActionFlags *ioActionFlags,
                    const AudioTimeStamp *inTimeStamp,
                    UInt32 inBusNumber,
                    UInt32 inNumberFrames,
                    AudioBufferList *ioData)
{
    // Fixed amplitude is good enough for our purposes
    const double amplitude = 0.25;
    // Get the tone parameters out of the view controller
    ToneGen *toneGen = (__bridge ToneGen *)inRefCon;
    double theta = toneGen.theta;
    double theta_increment = 2.0 * M_PI * toneGen.frequency / toneGen.sampleRate;
    // This is a mono tone generator so we only need the first buffer
    const int channel = 0;
    Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
    // Generate the samples
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        buffer[frame] = sin(theta) * amplitude;
        theta += theta_increment;
        if (theta > 2.0 * M_PI)
        {
            theta -= 2.0 * M_PI;
        }
    }
    // Store the theta back in the view controller
    toneGen.theta = theta;
    return noErr;
}

void ToneInterruptionListener(void *inClientData, UInt32 inInterruptionState)
{
    ToneGen *toneGen = (__bridge ToneGen *)inClientData;
    [toneGen stop];
}
Music.h
#import <Foundation/Foundation.h>
@interface Music : NSObject

- (void)play:(NSString *)music;
- (void)stop;

@end
Music.m
#import "Music.h"
#import "ToneGen.h"
@interface Music ()

@property (nonatomic, readonly) ToneGen *toneGen;
@property (nonatomic, assign) int octive;
@property (nonatomic, assign) int tempo;
@property (nonatomic, assign) int length;
@property (nonatomic, strong) NSData *music;
@property (nonatomic, assign) int dataPos;
@property (nonatomic, assign) BOOL isPlaying;

- (void)playNote;

@end
@implementation Music

@synthesize toneGen = _toneGen;
- (ToneGen *)toneGen
{
    if (_toneGen == nil)
    {
        _toneGen = [[ToneGen alloc] init];
        _toneGen.delegate = self;
    }
    return _toneGen;
}

@synthesize octive = _octive;
- (void)setOctive:(int)octive
{
    // Sanity check
    if (octive < 0)
        octive = 0;
    if (octive > 6)
        octive = 6;
    _octive = octive;
}

@synthesize tempo = _tempo;
- (void)setTempo:(int)tempo
{
    // Sanity check
    if (tempo < 30)
        tempo = 30;
    if (tempo > 255)
        tempo = 255;
    _tempo = tempo;
}

@synthesize length = _length;
- (void)setLength:(int)length
{
    // Sanity check
    if (length < 1)
        length = 1;
    if (length > 64)
        length = 64;
    _length = length;
}

@synthesize music = _music;
@synthesize dataPos = _dataPos;
@synthesize isPlaying = _isPlaying;
- (id)init
{
    self = [super init];
    if (self)
    {
        self.octive = 4;
        self.tempo = 120;
        self.length = 1;
        return self;
    }
    return nil;
}

- (void)play:(NSString *)music
{
    DLog(@"%@", music);
    self.music = [[music stringByReplacingOccurrencesOfString:@"+" withString:@"#"]
                  dataUsingEncoding:NSASCIIStringEncoding];
    self.dataPos = 0;
    self.isPlaying = YES;
    [self playNote];
}

- (void)stop
{
    self.isPlaying = NO;
}
- (void)playNote
{
    if (!self.isPlaying)
        return;
    if (self.dataPos > self.music.length || self.music.length == 0) {
        self.isPlaying = NO;
        return;
    }
    unsigned char *data = (unsigned char *)[self.music bytes];
    unsigned int code = (unsigned int)data[self.dataPos];
    self.dataPos++;
    switch (code) {
        case 65: // A
        case 66: // B
        case 67: // C
        case 68: // D
        case 69: // E
        case 70: // F
        case 71: // G
        {
            // Peek at the next char to look for a sharp or flat
            bool sharp = NO;
            bool flat = NO;
            if (self.dataPos < self.music.length) {
                unsigned int peak = (unsigned int)data[self.dataPos];
                if (peak == 35) // #
                {
                    self.dataPos++;
                    sharp = YES;
                }
                else if (peak == 45) // -
                {
                    self.dataPos++;
                    flat = YES;
                }
            }
            // Peek ahead for a length change
            bool look = YES;
            int count = 0;
            int newLength = 0;
            while (self.dataPos < self.music.length && look) {
                unsigned int peak = (unsigned int)data[self.dataPos];
                if (peak >= 48 && peak <= 57)
                {
                    peak -= 48;
                    int n = (count * 10);
                    if (n == 0) { n = 1; }
                    newLength += peak * n;
                    self.dataPos++;
                } else {
                    look = NO;
                }
            }
            // Pick the note length
            int length = self.length;
            if (newLength != 0)
            {
                DLog(@"InlineLength: %d", newLength);
                length = newLength;
            }
            // Create the note string
            NSString *note = [NSString stringWithFormat:@"%c", code];
            if (sharp)
                note = [note stringByAppendingFormat:@"#"];
            else if (flat)
                note = [note stringByAppendingFormat:@"-"];
            // Set the tone generator frequency
            [self setFreq:[self getNoteNumber:note]];
            // Play the note
            [self.toneGen play:(self.tempo / length)];
        }
            break;
        case 76: // L (length)
        {
            bool look = YES;
            int newLength = 0;
            while (self.dataPos < self.music.length && look) {
                unsigned int peak = (unsigned int)data[self.dataPos];
                if (peak >= 48 && peak <= 57)
                {
                    peak -= 48;
                    newLength = newLength * 10 + peak;
                    self.dataPos++;
                } else {
                    look = NO;
                }
            }
            self.length = newLength;
            DLog(@"Length: %d", self.length);
            [self playNote];
        }
            break;
        case 79: // O (octave)
        {
            bool look = YES;
            int newOctive = 0;
            while (self.dataPos < self.music.length && look) {
                unsigned int peak = (unsigned int)data[self.dataPos];
                if (peak >= 48 && peak <= 57)
                {
                    peak -= 48;
                    newOctive = newOctive * 10 + peak;
                    self.dataPos++;
                } else {
                    look = NO;
                }
            }
            self.octive = newOctive;
            DLog(@"Octive: %d", self.octive);
            [self playNote];
        }
            break;
        case 84: // T (tempo)
        {
            bool look = YES;
            int newTempo = 0;
            while (self.dataPos < self.music.length && look) {
                unsigned int peak = (unsigned int)data[self.dataPos];
                if (peak >= 48 && peak <= 57)
                {
                    peak -= 48;
                    newTempo = newTempo * 10 + peak;
                    self.dataPos++;
                } else {
                    look = NO;
                }
            }
            self.tempo = newTempo;
            DLog(@"Tempo: %d", self.tempo);
            [self playNote];
        }
            break;
        default:
            [self playNote];
            break;
    }
}
- (int)getNoteNumber:(NSString *)note
{
    note = [note uppercaseString];
    DLog(@"%@", note);
    if ([note isEqualToString:@"A"])
        return 0;
    else if ([note isEqualToString:@"A#"] || [note isEqualToString:@"B-"])
        return 1;
    else if ([note isEqualToString:@"B"] || [note isEqualToString:@"C-"])
        return 2;
    else if ([note isEqualToString:@"C"] || [note isEqualToString:@"B#"])
        return 3;
    else if ([note isEqualToString:@"C#"] || [note isEqualToString:@"D-"])
        return 4;
    else if ([note isEqualToString:@"D"])
        return 5;
    else if ([note isEqualToString:@"D#"] || [note isEqualToString:@"E-"])
        return 6;
    else if ([note isEqualToString:@"E"] || [note isEqualToString:@"F-"])
        return 7;
    else if ([note isEqualToString:@"F"] || [note isEqualToString:@"E#"])
        return 8;
    else if ([note isEqualToString:@"F#"] || [note isEqualToString:@"G-"])
        return 9;
    else if ([note isEqualToString:@"G"])
        return 10;
    else if ([note isEqualToString:@"G#"])
        return 11;
    return 0; // fall back to A for anything unrecognized (the original fell off the end here)
}

- (void)setFreq:(int)note
{
    float a = powf(2, self.octive);
    float b = powf(1.059463, note);
    float freq = roundf((275.0 * a * b) / 10);
    self.toneGen.frequency = freq;
}

- (void)toneStop
{
    [self playNote];
}

@end
To play a little tune, create a Music object and play:
[self.music play:@"T180 DF#A L2 A L4 O4 AA P4 F#F# P4 O3 D DF#A L2 A L4 O4 AA P4 GG P4 O3 C#C#EB L2 B L4 O4 BB P4 GG P4 O3 C#C#EB L2 B L4 O4 BB P4 F+F+ P4 O3 DDF#A L2 O4 D L4 O5 DD P4O4 AA P4 O3 DDF#A L2 O4 D L4 O5 DD P4O4 BB P4 EEG L8 B P8 ML B1 L4 MN G#A ML L3 O5 F#1L4 MN D O4 F# ML L2 F# MN L4 E ML L2 B MN L4 AD P8 D8 D4"];
Any idea how to remove the chirp between notes?
I think that the bit where you stop audio output between notes is the culprit:
if (self.toneUnit)
{
    AudioOutputUnitStop(self.toneUnit);
    AudioUnitUninitialize(self.toneUnit);
    AudioComponentInstanceDispose(self.toneUnit);
    self.toneUnit = nil;
}
Just leave the tone unit active and you'll have less chirping. You'll need some other way to generate silence, probably by having RenderTone continue to run but generate amplitude zero.
I was able to eliminate the slight chirps that remained by having it, on a frequency change, fade the amplitude down to nothing, update the frequency, and fade back in again. This is of course what the old PC speaker couldn't do (except for a few people who rapidly switched it on and off), but with a very rapid fade you can probably get the old-school effect without the chirps.
Here's my fading RenderTone function (currently using evil global variables):
double currentFrequency = 0;
double currentSampleRate = 0;
double currentAmplitude = 0;

OSStatus RenderTone(void *inRefCon,
                    AudioUnitRenderActionFlags *ioActionFlags,
                    const AudioTimeStamp *inTimeStamp,
                    UInt32 inBusNumber,
                    UInt32 inNumberFrames,
                    AudioBufferList *ioData)
{
    // Fixed amplitude is good enough for our purposes
    const double amplitude = 0.5;
    // Get the tone parameters out of the view controller
    ToneGen *toneGen = (__bridge ToneGen *)inRefCon;
    double theta = toneGen.theta;
    BOOL fadingOut = NO;
    if ((currentFrequency != toneGen.frequency) || (currentSampleRate != toneGen.sampleRate))
    {
        if (currentAmplitude > DBL_EPSILON)
        {
            fadingOut = YES;
        }
        else
        {
            currentFrequency = toneGen.frequency;
            currentSampleRate = toneGen.sampleRate;
        }
    }
    double theta_increment = 2.0 * M_PI * currentFrequency / currentSampleRate;
    // This is a mono tone generator so we only need the first buffer
    const int channel = 0;
    Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
    // Generate the samples
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        buffer[frame] = sin(theta) * currentAmplitude;
        //NSLog(@"amplitude = %f", currentAmplitude);
        theta += theta_increment;
        if (theta > 2.0 * M_PI)
        {
            theta -= 2.0 * M_PI;
        }
        if (fadingOut)
        {
            if (currentAmplitude > 0)
            {
                currentAmplitude -= 0.001;
                if (currentAmplitude < 0)
                    currentAmplitude = 0;
            }
        }
        else
        {
            if (currentAmplitude < amplitude)
            {
                currentAmplitude += 0.001;
                if (currentAmplitude > amplitude)
                    currentAmplitude = amplitude;
            }
        }
    }
    // Store the theta back in the view controller
    toneGen.theta = theta;
    return noErr;
}
That little chirp is generally an artifact of mathematics. The ear essentially analyzes input signals in the frequency domain. A steady sine wave at 220 Hz, for example, will sound like an A. However, when your sine wave is unsteady, other frequencies show up because of the boundary. In particular, you get a bit of a pop from the very high frequency components of starting or stopping a sound abruptly.
The way I solved this in my synthesizer (in JavaScript, not Obj-C, but the concept here is the same) is to fade the sound in over 300 samples or so on note-on, and fade it out over 300 samples or so on note-off. There's no way to truly eliminate boundary effects other than not having a boundary at all, but even a small, imperceptible amount of fade will render the boundary effect imperceptible as well.
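As a rough sketch of that idea in the same render-callback style as the code above (my illustration, not the poster's JavaScript; kFadeSamples, frameIndex, and noteFrames are assumed per-note bookkeeping):

enum { kFadeSamples = 300 };

// Linear gain ramp: fade in over the first 300 samples of a note and
// out over the last 300, unity gain in between.
static double fadeGain(UInt32 frameIndex, UInt32 noteFrames)
{
    if (frameIndex < kFadeSamples)                // note just started
        return (double)frameIndex / kFadeSamples;
    if (frameIndex + kFadeSamples > noteFrames)   // note about to end
        return (double)(noteFrames - frameIndex) / kFadeSamples;
    return 1.0;
}

// In the render loop, each sample would then be scaled:
//     buffer[frame] = sin(theta) * amplitude * fadeGain(frameIndex++, noteFrames);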

Error in Audio Unit code (RemoteIO) for iPhone

I have this code to read buffer samples, but I get a strange Mach-O linker error. The Audio Unit framework couldn't be loaded, so I added AudioToolbox and CoreAudio as I read I should.
My code is:
#define kOutputBus 0
#define kInputBus 1

AudioComponentInstance audioUnit;

@implementation remoteIO

// callback function:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    NSLog(@"%ld", inNumberFrames);
    buffer.mData = malloc(inNumberFrames * 2);
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;
    OSStatus status;
    status = AudioUnitRender(audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    checkStatus(status); // here is the warning + error
    double *q = (double *)(&bufferList)->mBuffers[0].mData;
    for (int i = 0; i < strlen((const char *)(&bufferList)->mBuffers[0].mData); i++)
    {
        NSLog(@"%f", q[i]);
    }
    return noErr; // note: the original was missing a return value here
}
And the reading method:
- (void)startListeningWithFrequency:(float)freq
{
    OSStatus status;
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kInputBus, &flag, sizeof(flag));
    checkStatus(status);
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 44100.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    checkStatus(status);
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  kInputBus, &callbackStruct, sizeof(callbackStruct));
    checkStatus(status);
    status = AudioOutputUnitStart(audioUnit);
}
And what I get is this error and warning:
Undefined symbols for architecture i386:
  "_checkStatus", referenced from:
      _recordingCallback in remoteIO.o
      -[remoteIO startListeningWithFrequency:] in remoteIO.o
ld: symbol(s) not found for architecture i386
collect2: ld returned 1 exit status
What's wrong here?
Thanks.
You have to write your own checkStatus() function, because what it does (e.g. how it reports an error: dialog box, console output, analytics logging, crash dump, etc.), or whether it does anything at all other than return to the audio code, is specific to each app.
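For example, a bare-bones version that just logs the failing code might look like this (my sketch; the linker only needs the symbol to exist, and the behavior is up to you):

static void checkStatus(OSStatus status)
{
    if (status != noErr) {
        // Log the raw error code; a real app might map common codes to
        // readable strings, or assert in debug builds.
        NSLog(@"Audio error: %d", (int)status);
    }
}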