I'm trying to create an H.264 Compression Session with the data from my screen. I've created a CGDisplayStreamRef instance like so:
displayStream = CGDisplayStreamCreateWithDispatchQueue(0, 100, 100, k32BGRAPixelFormat, nil, self.screenCaptureQueue, ^(CGDisplayStreamFrameStatus status, uint64_t displayTime, IOSurfaceRef frameSurface, CGDisplayStreamUpdateRef updateRef) {
//Call encoding session here
});
Below is how I currently have the encoding function setup:
- (void)encode:(CMSampleBufferRef)sampleBuffer {
CVImageBufferRef imageBuffer = (CVImageBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CMTime presentationTimeStamp = CMTimeMake(frameID++, 1000);
VTEncodeInfoFlags flags;
OSStatus statusCode = VTCompressionSessionEncodeFrame(EncodingSession,
imageBuffer,
presentationTimeStamp,
kCMTimeInvalid,
NULL, NULL, &flags);
if (statusCode != noErr) {
NSLog(#"H264: VTCompressionSessionEncodeFrame failed with %d", (int)statusCode);
VTCompressionSessionInvalidate(EncodingSession);
CFRelease(EncodingSession);
EncodingSession = NULL;
return;
}
NSLog(#"H264: VTCompressionSessionEncodeFrame Success");
}
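For reference, here is a minimal sketch of creating the compression session that this encode function assumes already exists (EncodingSession comes from the code above; the 1280x720 size and the didCompressH264 callback are assumptions for illustration):
#import <VideoToolbox/VideoToolbox.h>

// Hypothetical output callback matching VTCompressionOutputCallback; the encoded
// sample buffers arrive here.
static void didCompressH264(void *outputCallbackRefCon, void *sourceFrameRefCon,
                            OSStatus status, VTEncodeInfoFlags infoFlags,
                            CMSampleBufferRef sampleBuffer) {
    // Handle the encoded sample buffer (e.g. extract the NAL units) here.
}

- (void)setupEncoder {
    int width = 1280, height = 720; // assumed capture size
    OSStatus status = VTCompressionSessionCreate(NULL, width, height,
                                                 kCMVideoCodecType_H264,
                                                 NULL, NULL, NULL,
                                                 didCompressH264,
                                                 (__bridge void *)(self),
                                                 &EncodingSession);
    if (status == noErr) {
        // Real-time encoding is a reasonable choice for screen capture.
        VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
        VTCompressionSessionPrepareToEncodeFrames(EncodingSession);
    }
}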
I'm trying to understand how I can convert the data from my screen into a CMSampleBufferRef so I can properly call my encode function. So far, I've not been able to determine if this is possible, or the right approach for what I'm trying to do. Does anyone have any suggestions?
EDIT: I've gotten my IOSurface converted to a CMBlockBuffer, but haven't yet figured out how to convert that to a CMSampleBufferRef:
void *mem = IOSurfaceGetBaseAddress(frameSurface);
size_t bytesPerRow = IOSurfaceGetBytesPerRow(frameSurface);
size_t height = IOSurfaceGetHeight(frameSurface);
size_t totalBytes = bytesPerRow * height;
CMBlockBufferRef blockBuffer;
CMBlockBufferCreateWithMemoryBlock(kCFAllocatorNull, mem, totalBytes, kCFAllocatorNull, NULL, 0, totalBytes, 0, &blockBuffer);
EDIT 2
Some more progress:
CMSampleBufferRef sampleBuffer;
OSStatus sampleStatus = CMSampleBufferCreate(
NULL, blockBuffer, TRUE, NULL, NULL,
NULL, 1, 1, NULL,
0, NULL, &sampleBuffer);
[self encode:sampleBuffer];
I'm possibly a bit late, but nevertheless it could be helpful for others:
CGDisplayStreamCreateWithDispatchQueue(CGMainDisplayID(), 100, 100, k32BGRAPixelFormat, nil, self.screenCaptureQueue, ^(CGDisplayStreamFrameStatus status, uint64_t displayTime, IOSurfaceRef frameSurface, CGDisplayStreamUpdateRef updateRef) {
// The created pixel buffer retains the surface object.
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreateWithIOSurface(NULL, frameSurface, NULL, &pixelBuffer);
// Create the video-type-specific description for the pixel buffer.
CMVideoFormatDescriptionRef videoFormatDescription;
CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoFormatDescription);
// All the necessary parts for creating a `CMSampleBuffer` are ready.
CMSampleBufferRef sampleBuffer;
// Start from invalid timing and fill in real timestamps if you need them (see the sketch below).
CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
CMSampleBufferCreateReadyWithImageBuffer(NULL, pixelBuffer, videoFormatDescription, &timingInfo, &sampleBuffer);
// Do the stuff
// Release the resources to let the frame surface be reused in the queue
// `kCGDisplayStreamQueueDepth` is responsible for the size of the queue
CFRelease(sampleBuffer);
CFRelease(videoFormatDescription);
CFRelease(pixelBuffer);
});
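If you need real timestamps rather than leaving the timing invalid, one option (a minimal sketch, assuming the callback's displayTime is in mach absolute host time units) is to let CoreMedia convert it. Note also that VTCompressionSessionEncodeFrame accepts a CVImageBufferRef directly, so you could pass pixelBuffer straight to the encoder and skip the CMSampleBuffer wrapper entirely:
#import <CoreMedia/CoreMedia.h>

// Sketch: derive a presentation timestamp from the callback's displayTime
// (assumed to be mach absolute host time) and use it for the sample buffer.
CMTime pts = CMClockMakeHostTimeFromSystemUnits(displayTime);
CMSampleTimingInfo timingInfo = {
    .duration = kCMTimeInvalid,
    .presentationTimeStamp = pts,
    .decodeTimeStamp = kCMTimeInvalid
};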
I am using FFmpeg to access an RTSP stream in my macOS app.
REACHED GOALS: I have created a tone generator which creates single channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and audio sample rates are changed.
GOAL: The goal is to merge multi-channel audio buffers into a single CMSampleBuffer.
Audio data lifecycle:
AVCodecContext* audioContext = self.rtspStreamProvider.audioCodecContext;
if (!audioContext) { return; }
// Getting audio settings from FFmpeg's audio context (AVCodecContext).
int samplesPerChannel = audioContext->frame_size;
int frameNumber = audioContext->frame_number;
int sampleRate = audioContext->sample_rate;
int fps = [self.rtspStreamProvider fps];
int calculatedSampleRate = sampleRate / fps;
// NSLog(#"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);
// Decoding the audio data from a encoded AVPacket into a AVFrame.
AVFrame* audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
if (!audioFrame) { return; }
// Extracting my audio buffers from FFmpeg's AVFrame.
uint8_t* leftChannelAudioBufRef = audioFrame->data[0];
uint8_t* rightChannelAudioBufRef = audioFrame->data[1];
// Creating the CMSampleBuffer with audio data.
CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
// CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:packet->data[1] channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
if (!leftSampleBuffer) { return; }
if (!self.audioQueue) { return; }
if (!self.audioDelegates) { return; }
// All audio consumers will receive audio samples via delegation.
dispatch_sync(self.audioQueue, ^{
NSHashTable *audioDelegates = self.audioDelegates;
for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
{
[audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
// [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
}
});
Creating the CMSampleBuffer that contains the audio data:
import Foundation
import CoreMedia
@objc class CMSampleBufferFactory: NSObject
{
@objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
channelCount: UInt32,
framesCount: CMItemCount,
sampleRate: Double) -> CMSampleBuffer? {
/* Prepare for sample Buffer creation */
var sampleBuffer: CMSampleBuffer! = nil
var osStatus: OSStatus = -1
var audioFormatDescription: CMFormatDescription! = nil
var absd: AudioStreamBasicDescription! = nil
let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))
// NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using Float32.
let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
let memoryBlockByteLength = framesCount * Int(bytesPerFrame)
// var acl = AudioChannelLayout()
// acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo
/* Sample Buffer Block buffer creation */
var blockBuffer: CMBlockBuffer?
osStatus = CMBlockBufferCreateWithMemoryBlock(
allocator: kCFAllocatorDefault,
memoryBlock: nil,
blockLength: memoryBlockByteLength,
blockAllocator: nil,
customBlockSource: nil,
offsetToData: 0,
dataLength: memoryBlockByteLength,
flags: 0,
blockBufferOut: &blockBuffer
)
assert(osStatus == kCMBlockBufferNoErr)
guard let eBlock = blockBuffer else { return nil }
osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
assert(osStatus == kCMBlockBufferNoErr)
TVBlockBufferHelper.fillAudioBlockBuffer(blockBuffer,
audioData: data,
frames: Int32(framesCount))
/* Audio description creations */
absd = AudioStreamBasicDescription(
mSampleRate: sampleRate,
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
mBytesPerPacket: bytesPerFrame,
mFramesPerPacket: 1,
mBytesPerFrame: bytesPerFrame,
mChannelsPerFrame: channelCount,
mBitsPerChannel: 32,
mReserved: 0
)
guard absd != nil else {
print("\nCreating AudioStreamBasicDescription Failed.")
return nil
}
osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
asbd: &absd,
layoutSize: 0,
layout: nil,
// layoutSize: MemoryLayout<AudioChannelLayout>.size,
// layout: &acl,
magicCookieSize: 0,
magicCookie: nil,
extensions: nil,
formatDescriptionOut: &audioFormatDescription)
guard osStatus == noErr else {
print("\nCreating CMFormatDescription Failed.")
return nil
}
/* Create sample Buffer */
var timingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)
osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
dataBuffer: eBlock,
dataReady: true,
makeDataReadyCallback: nil,
refcon: nil,
formatDescription: audioFormatDescription,
sampleCount: framesCount,
sampleTimingEntryCount: 1,
sampleTimingArray: &timmingInfo,
sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
sampleSizeArray: nil, // Sample sizes are given in bytes; not needed when sampleSizeEntryCount is 0.
sampleBufferOut: &sampleBuffer)
return sampleBuffer
}
}
The CMSampleBuffer is filled with raw audio data from FFmpeg:
@import Foundation;
@import CoreMedia;
@interface TVBlockBufferHelper : NSObject
+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
audioData:(uint8_t *)data
frames:(int)framesCount;
@end
#import "TVBlockBufferHelper.h"
@implementation TVBlockBufferHelper
+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
audioData:(uint8_t *)data
frames:(int)framesCount
{
// Possibly dev error.
if (framesCount == 0) {
NSAssert(false, #"\nfillAudioBlockBuffer/audioData/frames will not be able to fill an blockBuffer which has no frames.");
return;
}
char *rawBuffer = NULL;
size_t size = 0;
OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
if(status != noErr)
{
return;
}
// Copy framesCount Float32 samples; the factory above allocates framesCount * bytesPerFrame bytes.
memcpy(rawBuffer, data, framesCount * sizeof(Float32));
}
@end
The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer.
The multi-channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer, or something that could be put into a CMSampleBuffer for further consumption.
This direction would lead me to AudioUnits, AUGraph, and the AudioToolbox. I don't understand all of these classes and how they work together. I have found some code snippets on SO which could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.
Is there another way to merge audio buffers into a new one?
Is creating a multi-channel mixer using AudioToolbox the right direction?
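One lighter-weight alternative, offered as a minimal sketch rather than a full answer: since the decoded FFmpeg frame is planar, the two channel buffers can simply be interleaved by hand and handed to a two-channel variant of the factory above. This assumes the samples are packed Float32 (AV_SAMPLE_FMT_FLTP); interleaveStereo is a hypothetical helper:
// Hypothetical helper: interleave two planar Float32 channels into L R L R ... order.
// Assumes both planes contain frameCount valid Float32 samples.
static void interleaveStereo(const float *left, const float *right,
                             float *interleaved, int frameCount)
{
    for (int i = 0; i < frameCount; i++) {
        interleaved[2 * i]     = left[i];
        interleaved[2 * i + 1] = right[i];
    }
}
The interleaved buffer would then back a CMSampleBuffer whose AudioStreamBasicDescription uses mChannelsPerFrame = 2 and mBytesPerFrame = 8.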
I'm programming a game framework based on DirectX 11, but I have a problem: my textures are displayed badly on screen. This is a screenshot:
As you can see, the image is not perfect. I've noticed this happens only if I initialize the swap chain as windowed; if I initialize it as full screen instead, the sprite is shown correctly, and even if I switch from full screen to windowed at runtime it is still shown correctly. This is the image shown on screen:
Here is the initialization of my swap chain:
RECT dimensions;
GetClientRect(game->Window, &dimensions);
unsigned int width = dimensions.right - dimensions.left;
unsigned int height = dimensions.bottom - dimensions.top;
D3D_DRIVER_TYPE driverTypes[] =
{
D3D_DRIVER_TYPE_HARDWARE,
D3D_DRIVER_TYPE_WARP,
D3D_DRIVER_TYPE_REFERENCE,
D3D_DRIVER_TYPE_SOFTWARE
};
unsigned int totalDriverTypes = ARRAYSIZE(driverTypes);
D3D_FEATURE_LEVEL featureLevels[] =
{
D3D_FEATURE_LEVEL_11_0,
D3D_FEATURE_LEVEL_10_1,
D3D_FEATURE_LEVEL_10_0
};
unsigned int totalFeatureLevels = ARRAYSIZE(featureLevels);
DXGI_SWAP_CHAIN_DESC swapChainDesc;
ZeroMemory(&swapChainDesc, sizeof(swapChainDesc));
swapChainDesc.BufferCount = 1;
swapChainDesc.BufferDesc.Width = width;
swapChainDesc.BufferDesc.Height = height;
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDesc.BufferDesc.RefreshRate.Numerator = 60;
swapChainDesc.BufferDesc.RefreshRate.Denominator = 1;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.OutputWindow = game->Window;
swapChainDesc.Windowed = true;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.SampleDesc.Quality = 0;
unsigned int creationFlags = 0;
#ifdef _DEBUG
creationFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
HRESULT result;
unsigned int driver = 0;
pin_ptr<IDXGISwapChain*> swapChainPointer;
swapChainPointer = &swapChain_;
pin_ptr<ID3D11Device*> d3dDevicePointer;
d3dDevicePointer = &d3dDevice_;
pin_ptr<D3D_FEATURE_LEVEL> featureLevelPointer;
featureLevelPointer = &featureLevel_;
pin_ptr<ID3D11DeviceContext*> d3dContextPointer;
d3dContextPointer = &d3dContext_;
for (driver = 0; driver < totalDriverTypes; ++driver)
{
result = D3D11CreateDeviceAndSwapChain(0, driverTypes[driver], 0, creationFlags, featureLevels, totalFeatureLevels,
D3D11_SDK_VERSION, &swapChainDesc, swapChainPointer,
d3dDevicePointer, featureLevelPointer, d3dContextPointer);
if (SUCCEEDED(result))
{
driverType_ = driverTypes[driver];
break;
}
}
And this is the code to toggle the full screen:
swapChain_->SetFullscreenState(isFullScreen, NULL);
Where isFullScreen is a boolean passed to the containing function.
Can anyone help me? Thanks in advance!
EDIT:
Solved:
I've changed the WS_OVERLAPPED style in my window creation:
RECT rc = { 0, 0, WindowWidth, WindowHeight };
AdjustWindowRect(&rc, WS_OVERLAPPED | WS_MINIMIZEBOX | WS_SYSMENU, FALSE);
LPCTSTR title = Utilities::StringToLPCSTR(Title);
HWND hwnd = CreateWindowA("BSGame", title, WS_OVERLAPPED | WS_MINIMIZEBOX | WS_SYSMENU, CW_USEDEFAULT, CW_USEDEFAULT, rc.right - rc.left, rc.bottom - rc.top, NULL, NULL, windowHandler, NULL);
to WS_OVERLAPPEDWINDOW.
When you call SetFullscreenState on your swap chain, you only request a mode change.
Normally, after the call you should receive a WM_SIZE message for your main window; you then need to do the following:
Release any associated resources bound to your swap chain (e.g. the back-buffer texture and RenderTargetView).
Make sure you also call ClearState on the device context; if your swap chain's buffers are still bound to the pipeline you will have issues (the runtime will not destroy a resource while it is bound to the pipeline).
Next, call ResizeBuffers on the swap chain, like:
HRESULT result = mSwapChain->ResizeBuffers(0,0,0,DXGI_FORMAT_UNKNOWN,0);
Finally, you can query the back-buffer texture again:
ID3D11Texture2D* texture;
mSwapChain->GetBuffer(0,__uuidof(ID3D11Texture2D),(void**)(&texture));
//GetBuffer does an AddRef, so we Release here
texture->Release();
Then create a new RenderTargetView and update your viewport to the new size (get the description from the texture), as sketched below.
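A minimal sketch of those last steps (variable names such as swapChain_, d3dDevice_, and d3dContext_ follow the question's code and are assumptions; treat this as an outline, not drop-in code):
ID3D11Texture2D* backBuffer = nullptr;
HRESULT hr = swapChain_->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
if (SUCCEEDED(hr))
{
    ID3D11RenderTargetView* renderTargetView = nullptr;
    hr = d3dDevice_->CreateRenderTargetView(backBuffer, nullptr, &renderTargetView);

    // The texture description carries the new width/height for the viewport.
    D3D11_TEXTURE2D_DESC backBufferDesc;
    backBuffer->GetDesc(&backBufferDesc);
    backBuffer->Release(); // GetBuffer added a reference

    D3D11_VIEWPORT viewport = {};
    viewport.Width = static_cast<float>(backBufferDesc.Width);
    viewport.Height = static_cast<float>(backBufferDesc.Height);
    viewport.MinDepth = 0.0f;
    viewport.MaxDepth = 1.0f;

    d3dContext_->OMSetRenderTargets(1, &renderTargetView, nullptr);
    d3dContext_->RSSetViewports(1, &viewport);
}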
In my case the problem was during window creation; this was my situation:
RECT rc = { 0, 0, WindowWidth, WindowHeight };
AdjustWindowRect(&rc, WS_OVERLAPPED | WS_MINIMIZEBOX | WS_SYSMENU, FALSE);
LPCTSTR title = Utilities::StringToLPCSTR(Title);
HWND hwnd = CreateWindowA("BSGame", title, WS_OVERLAPPED | WS_MINIMIZEBOX | WS_SYSMENU, CW_USEDEFAULT, CW_USEDEFAULT, rc.right - rc.left, rc.bottom - rc.top, NULL, NULL, windowHandler, NULL);
To solve it, I changed WS_OVERLAPPED to WS_OVERLAPPEDWINDOW.
How do I access the current volume level of a Mac from the Cocoa API?
For example: when I'm using Spotify.app on OS X 10.7 and a sound advertisement comes up, and I turn down my Mac's volume, the app will pause the ad until I turn it back up to an average level. I find this incredibly obnoxious and a violation of user privacy, but somehow Spotify has found a way to do this.
Is there any way I can do this with Cocoa? I'm making an app where it might come in useful to warn the user if the volume is low.
There are two options available. The first step is to determine what device you'd like and get its ID. Assuming the default output device, the code will look something like:
AudioObjectPropertyAddress propertyAddress = {
kAudioHardwarePropertyDefaultOutputDevice,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster
};
AudioDeviceID deviceID;
UInt32 dataSize = sizeof(deviceID);
OSStatus result = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, &deviceID);
if(kAudioHardwareNoError != result)
// Handle the error
Next, you can use the kAudioHardwareServiceDeviceProperty_VirtualMasterVolume property to get the device's virtual master volume:
AudioObjectPropertyAddress propertyAddress = {
kAudioHardwareServiceDeviceProperty_VirtualMasterVolume,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
if(!AudioHardwareServiceHasProperty(deviceID, &propertyAddress))
// An error occurred
Float32 volume;
UInt32 dataSize = sizeof(volume);
OSStatus result = AudioHardwareServiceGetPropertyData(deviceID, &propertyAddress, 0, NULL, &dataSize, &volume);
if(kAudioHardwareNoError != result)
// An error occurred
Alternatively, you can use kAudioDevicePropertyVolumeScalar to get the volume for a specific channel:
UInt32 channel = 1; // Channel 0 is master, if available
AudioObjectPropertyAddress propertyAddress = {
kAudioDevicePropertyVolumeScalar,
kAudioDevicePropertyScopeOutput,
channel
};
if(!AudioObjectHasProperty(deviceID, &propertyAddress))
// An error occurred
Float32 volume;
UInt32 dataSize = sizeof(volume);
OSStatus result = AudioObjectGetPropertyData(deviceID, &propertyAddress, 0, NULL, &dataSize, &volume);
if(kAudioHardwareNoError != result)
// An error occurred
The difference between the two is explained in Apple's docs:
kAudioHardwareServiceDeviceProperty_VirtualMasterVolume
A Float32 value that represents the value of the volume control. The
range for this property’s value is 0.0 (silence) through 1.0 (full
level). The effect of this property depends on the hardware device
associated with the HAL audio object. If the device has a master
volume control, this property controls it. If the device has
individual channel volume controls, this property applies to those
identified by the device's preferred multichannel layout, or the
preferred stereo pair if the device is stereo only. This control
maintains relative balance between the channels it affects.
So it can be tricky to define exactly what a device's volume is, especially for multichannel devices with non-standard channel maps.
From CocoaDev, these class methods look like they should work, though it's not particularly Cocoa-like:
#import <AudioToolbox/AudioServices.h>
+(AudioDeviceID)defaultOutputDeviceID
{
AudioDeviceID outputDeviceID = kAudioObjectUnknown;
// get the default output device
UInt32 propertySize = 0;
OSStatus status = noErr;
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mScope = kAudioObjectPropertyScopeGlobal;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mSelector = kAudioHardwarePropertyDefaultOutputDevice;
if (!AudioHardwareServiceHasProperty(kAudioObjectSystemObject, &propertyAOPA))
{
NSLog(#"Cannot find default output device!");
return outputDeviceID;
}
propertySize = sizeof(AudioDeviceID);
status = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject, &propertyAOPA, 0, NULL, &propertySize, &outputDeviceID);
if(status)
{
NSLog(#"Cannot find default output device!");
}
return outputDeviceID;
}
// getting system volume
+(float)volume
{
Float32 outputVolume;
UInt32 propertySize = 0;
OSStatus status = noErr;
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mSelector = kAudioHardwareServiceDeviceProperty_VirtualMasterVolume;
propertyAOPA.mScope = kAudioDevicePropertyScopeOutput;
AudioDeviceID outputDeviceID = [[self class] defaultOutputDeviceID];
if (outputDeviceID == kAudioObjectUnknown)
{
NSLog(#"Unknown device");
return 0.0;
}
if (!AudioHardwareServiceHasProperty(outputDeviceID, &propertyAOPA))
{
NSLog(#"No volume returned for device 0x%0x", outputDeviceID);
return 0.0;
}
propertySize = sizeof(Float32);
status = AudioHardwareServiceGetPropertyData(outputDeviceID, &propertyAOPA, 0, NULL, &propertySize, &outputVolume);
if (status)
{
NSLog(#"No volume returned for device 0x%0x", outputDeviceID);
return 0.0;
}
if (outputVolume < 0.0 || outputVolume > 1.0) return 0.0;
return outputVolume;
}
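A short usage sketch tying this back to the goal of warning when the volume is low (SystemVolume is a hypothetical name for whichever class you put those methods on):
float level = [SystemVolume volume]; // hypothetical class exposing the +volume method above
if (level < 0.1f) {
    NSLog(@"Warning: the system output volume is very low (%.2f).", level);
}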
The client (iOS) sends a message, the server checks it and answers, and iOS displays the answer.
There are two types of answer. For the first type, the server sends a single response. For the second type, the server sends 20 responses.
When the server sends a single response, I can process it without difficulty. The problem is with the second type:
I tried two ways of getting the data.
First, I used a simple CFSocket example with some modifications. When it receives one message, it works well; when it receives 20 messages, it stops with the error "Program received signal 'SIGABRT'".
//main code
CFSocketRef ref = CFSocketCreate(kCFAllocatorDefault, PF_INET, SOCK_DGRAM, 0, kCFSocketReadCallBack|kCFSocketDataCallBack|kCFSocketConnectCallBack|kCFSocketWriteCallBack, CFSockCallBack, NULL);
struct sockaddr_in theName;
struct hostent *hp;
theName.sin_port = htons(5003);
theName.sin_family = AF_INET;
hp = gethostbyname(IPADDRESS);
if( hp == NULL ) {
return;
}
memcpy( &theName.sin_addr.s_addr, hp->h_addr_list[PORT_NUM], hp->h_length );
CFDataRef addressData = CFDataCreate(NULL, (const UInt8 *)&theName, sizeof(struct sockaddr_in));
CFSocketConnectToAddress(ref, addressData, 30);
CFRunLoopSourceRef FrameRunLoopSource = CFSocketCreateRunLoopSource(NULL, ref , 0);
CFRunLoopAddSource(CFRunLoopGetCurrent(), FrameRunLoopSource, kCFRunLoopCommonModes);
//Callback Method
void CFSockCallBack (
CFSocketRef s,
CFSocketCallBackType callbackType,
CFDataRef address,
const void *data,
void *info
) {
NSLog(#"callback!");
if(callbackType == kCFSocketDataCallBack) {
[lock lock];
NSLog(#"has data");
UInt8 * d = CFDataGetBytePtr((CFDataRef)data);
int len = CFDataGetLength((CFDataRef)data);
for(int i=0; i < len; i++) {
// NSLog(#"%c",*(d+i));
}
//Data processing area=
[lock unlock];
}
if(callbackType == kCFSocketReadCallBack) {
NSLog(#"to read");
char buf[100] = {0};
int sock = CFSocketGetNative(s);
NSLog(#"to read");
NSLog(#"read:%d",recv(sock, &buf, 100, 0));
NSLog(#"%s",buf);
}
if(callbackType == kCFSocketWriteCallBack) {
NSLog(#"to write");
char sendbuf[100]={0x53, 0x4D, 0x49, 0x43, 0x2};
//strcpy(sendbuf,"GET / HTTP/1.0\r\n\r\n");
//NSLog(#"%c",0x53);
CFDataRef dt = CFDataCreate(NULL, sendbuf, 5);
CFSocketSendData(s, NULL, dt, strlen(sendbuf));
}
if(callbackType == kCFSocketConnectCallBack) {
NSLog(#"connected");
}
}
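One thing worth checking in the data callback, offered as a hedged suggestion rather than a confirmed fix: copy the incoming bytes into your own buffer before doing anything asynchronous with them, since the CFDataRef handed to the callback should not be assumed to stay valid afterwards. A minimal sketch:
if (callbackType == kCFSocketDataCallBack) {
    // Copy the payload out of the callback-owned CFDataRef right away.
    NSData *payload = [NSData dataWithBytes:CFDataGetBytePtr((CFDataRef)data)
                                     length:CFDataGetLength((CFDataRef)data)];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Process `payload` here (e.g. parse it and update the UI).
    });
}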
The second way is using CocoaAsyncSocket. It seems to work well, but sometimes it stops with an error: I used GCDAsyncUDPSocket, and it stops randomly with the signal "EXC_BAD_ACCESS".
I got this working on Android, so I think there must be another approach possible.
I know that AirPort can be turned off via the CoreWLAN framework, so I think there are probably functions or frameworks related to the Bluetooth device and the sound device as well.
How can I turn those devices off?
I assume you by "cannot have power so that it cannot speak", you mean you simply want to mute the speaker. I found some neat sample code here, using CoreAudio to mute the system's default speaker: http://cocoadev.com/index.pl?SoundVolume
I took the liberty of converting it to pure C and trying it out.
#import <CoreAudio/CoreAudio.h>
#import <stdio.h>
// getting system volume
float getVolume() {
float b_vol;
OSStatus err;
AudioDeviceID device;
UInt32 size;
UInt32 channels[2];
float volume[2];
// get device
size = sizeof device;
err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice, &size, &device);
if(err!=noErr) {
printf("audio-volume error get device\n");
return 0.0;
}
// try set master volume (channel 0)
size = sizeof b_vol;
err = AudioDeviceGetProperty(device, 0, 0, kAudioDevicePropertyVolumeScalar, &size, &b_vol); //kAudioDevicePropertyVolumeScalarToDecibels
if(noErr==err) return b_vol;
// otherwise, try separate channels
// get channel numbers
size = sizeof(channels);
err = AudioDeviceGetProperty(device, 0, 0,kAudioDevicePropertyPreferredChannelsForStereo, &size,&channels);
if(err!=noErr) printf("error getting channel-numbers\n");
size = sizeof(float);
err = AudioDeviceGetProperty(device, channels[0], 0, kAudioDevicePropertyVolumeScalar, &size, &volume[0]);
if(noErr!=err) printf("error getting volume of channel %d\n",channels[0]);
err = AudioDeviceGetProperty(device, channels[1], 0, kAudioDevicePropertyVolumeScalar, &size, &volume[1]);
if(noErr!=err) printf("error getting volume of channel %d\n",channels[1]);
b_vol = (volume[0]+volume[1])/2.00;
return b_vol;
}
// setting system volume
void setVolume(float involume) {
OSStatus err;
AudioDeviceID device;
UInt32 size;
Boolean canset = false;
UInt32 channels[2];
//float volume[2];
// get default device
size = sizeof device;
err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice, &size, &device);
if(err!=noErr) {
printf("audio-volume error get device\n");
return;
}
// try set master-channel (0) volume
size = sizeof canset;
err = AudioDeviceGetPropertyInfo(device, 0, false, kAudioDevicePropertyVolumeScalar, &size, &canset);
if(err==noErr && canset==true) {
size = sizeof involume;
err = AudioDeviceSetProperty(device, NULL, 0, false, kAudioDevicePropertyVolumeScalar, size, &involume);
return;
}
// else, try separate channels
// get channels
size = sizeof(channels);
err = AudioDeviceGetProperty(device, 0, false, kAudioDevicePropertyPreferredChannelsForStereo, &size,&channels);
if(err!=noErr) {
printf("error getting channel-numbers\n");
return;
}
// set volume
size = sizeof(float);
err = AudioDeviceSetProperty(device, 0, channels[0], false, kAudioDevicePropertyVolumeScalar, size, &involume);
if(noErr!=err) printf("error setting volume of channel %d\n",channels[0]);
err = AudioDeviceSetProperty(device, 0, channels[1], false, kAudioDevicePropertyVolumeScalar, size, &involume);
if(noErr!=err) printf("error setting volume of channel %d\n",channels[1]);
}
int main() {
printf("The system's volume is currently %f\n", getVolume());
printf("Setting volume to 0.\n");
setVolume(0.0f);
return 0;
}
I ran it and got this:
[04:29:03] [william@enterprise ~/Documents/Programming/c]$ gcc -framework CoreAudio -o mute.o coreaudio.c
.. snipped compiler output..
[04:29:26] [william@enterprise ~/Documents/Programming/c]$ ./mute.o
The system's volume is currently 0.436749
Setting volume to 0.
Hopefully this sends you in the right direction.