How can I modify the SpeakHere sample app to record in mono format on iPhone? - objective-c

I am new to iPhone development. Could you please help me modify Apple's SpeakHere sample app so that it records in mono? What do I have to set mChannelsPerFrame to, and what else should I set?
I have already changed part of it to record in linear PCM WAVE format.
Here is the link to SpeakHere.
Here is the function I think they intend you to change, but I don't fully understand the audio side of it:
void ChangeNumberChannels(UInt32 nChannels, bool interleaved)
// alter an existing format
{
    Assert(IsPCM(), "ChangeNumberChannels only works for PCM formats");
    UInt32 wordSize = SampleWordSize();   // get this before changing ANYTHING
    if (wordSize == 0)
        wordSize = (mBitsPerChannel + 7) / 8;
    mChannelsPerFrame = nChannels;
    mFramesPerPacket = 1;
    if (interleaved) {
        mBytesPerPacket = mBytesPerFrame = nChannels * wordSize;
        mFormatFlags &= ~kAudioFormatFlagIsNonInterleaved;
    } else {
        mBytesPerPacket = mBytesPerFrame = wordSize;
        mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
    }
}
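If you go that route, you would call it on the recording format before the audio queue is created. A minimal sketch, assuming mRecordFormat is the CAStreamBasicDescription that AQRecorder already uses (that assumption is mine, not stated above):
// Hypothetical call site: collapse the existing interleaved PCM description to one channel.
mRecordFormat.ChangeNumberChannels(1, true);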

On iPhone you will only be able to record in mono.
You shouldn't need to do anything to set this up in the SpeakHere example. It's done automatically. For example in AQRecorder::SetupAudioFormat:
size = sizeof(mRecordFormat.mChannelsPerFrame);
XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                                      &size,
                                      &mRecordFormat.mChannelsPerFrame),
              "couldn't get input channel count");
That queries the number of input channels the hardware supports and stores it directly in the record format (an ivar). Elsewhere, the buffer size calculations factor that in.
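If you do want to force mono regardless of what the hardware reports, here is a minimal sketch of the format setup. This is my own variation on the body of AQRecorder::SetupAudioFormat, assuming 16-bit signed linear PCM (as SpeakHere already records), not the shipped sample code:
// Hypothetical: force mono, 16-bit, packed, signed linear PCM.
memset(&mRecordFormat, 0, sizeof(mRecordFormat));
mRecordFormat.mSampleRate       = 44100.0;   // or query kAudioSessionProperty_CurrentHardwareSampleRate
mRecordFormat.mFormatID         = kAudioFormatLinearPCM;
mRecordFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
mRecordFormat.mChannelsPerFrame = 1;         // mono
mRecordFormat.mBitsPerChannel   = 16;
mRecordFormat.mBytesPerFrame    = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
mRecordFormat.mBytesPerPacket   = mRecordFormat.mBytesPerFrame;
mRecordFormat.mFramesPerPacket  = 1;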

Related

NAudio: Outputting an audio stream at high sample rate results in dropped and choppy sound

I am reading samples from a software defined radio device that delivers two channels of Int16 (shorts) that represent audio. I am trying to send this stream to an audio output device on a PC.
I already have this working with a different set of tools but would like to use NAudio in its place since there are other capabilities I could use.
The code starts with the software-defined radio callback delivering an array of shorts (i.e. Int16) arranged as left channel then right channel. The array size is 65536 and the sample rate is 192,000 Hz.
When a packet is received it is placed on a BlockingCollection to be picked up by a thread that sends out the audio.
So, for the purpose of this question, the thread starts by reading the blocking collection, which returns 65536 shorts.
StartStream = true;
waveformat = new WaveFormat(192000, 16, 2);
bs = new BufferedWaveProvider(waveformat);
bs.BufferLength = 65536 * 4;
_waveOut.DeviceNumber = 4;
while (true)
{
    dspPacket = dsp_Queue.Take(); // take int16[] off the queue
    int j = 0;
    try
    {
        for (int i = 0; i < 32768; i += 2)
        {
            byte[] qDataByte = BitConverter.GetBytes(dspShort[i]);
            dspBytes[j] = qDataByte[0];
            dspBytes[j + 1] = qDataByte[1];
            j += 2;
        }
        if (startStream)
        {
            bs.AddSamples(dspBytes, 0, 65536);
            _waveOut.Init(bs);
            _waveOut.Play();
            startStream = false;
        }
        else
        {
            bs.AddSamples(dspBytes, 0, 65536);
        }
    }
    catch (Exception ex)
    {
    }
    // SendData(dspPacket); // if this is uncommented, everything works correctly with the old method.
}
Now I feed the output of the waveout device to a virtual audio cable, which then feeds it into some spectrum analyser software; with this code the displayed signal (and the audio) keeps breaking up.
When I use the old routines, I get a solid line and the sound does not break up.
So, am I doing something wrong here?
Thanks, Tom

random corrupted data in serial Arduino

I have an issue reading serial data from an Arduino Uno in UWP C#. Sometimes when I start the app I get corrupted data.
But in the Arduino serial monitor the data always looks good.
The corrupted data looks like this:
57
5
But it should be 557.
The Arduino code:
String digx, digx2, digx3, digx1;

void setup() {
    delay(500);
    Serial.begin(9600);
}

void loop() {
    int sensorValue = analogRead(A1);
    sensorValue = map(sensorValue, 0, 1024, 0, 999);
    digx1 = String(sensorValue % 10);
    digx2 = String((sensorValue / 10) % 10);
    digx3 = String((sensorValue / 100) % 10);
    digx = (digx3 + digx2 + digx1);
    Serial.println(digx);
    Serial.flush();
    delay(100);
}
And the Windows Universal (UWP) code:
public sealed partial class MainPage : Page
{
    public DataReader dataReader;
    public SerialDevice serialPort;
    private string data;

    private async void InitializeConnection()
    {
        var aqs = SerialDevice.GetDeviceSelectorFromUsbVidPid(0x2341, 0x0043);
        var info = await DeviceInformation.FindAllAsync(aqs);
        serialPort = await SerialDevice.FromIdAsync(info[0].Id);
        Thread.Sleep(100);
        serialPort.DataBits = 8;
        serialPort.BaudRate = 9600;
        serialPort.Parity = SerialParity.None;
        serialPort.Handshake = SerialHandshake.None;
        serialPort.StopBits = SerialStopBitCount.One;
        serialPort.ReadTimeout = TimeSpan.FromMilliseconds(1000);
        serialPort.WriteTimeout = TimeSpan.FromMilliseconds(1000);
        Thread.Sleep(100);
        dataReader = new DataReader(serialPort.InputStream);
        while (true)
        {
            await ReadAsync();
        }
    }

    private async Task ReadAsync()
    {
        try
        {
            uint readBufferLength = 5;
            dataReader.InputStreamOptions = InputStreamOptions.Partial;
            //dataReader.UnicodeEncoding = UnicodeEncoding.Utf8;
            var loadAsyncTask = dataReader.LoadAsync(readBufferLength).AsTask();
            var ReadAsyncBytes = await loadAsyncTask;
            if (ReadAsyncBytes > 0)
            {
                data = dataReader.ReadString(ReadAsyncBytes);
                serialValue.Text = data;
            }
        }
        catch (Exception e)
        {
        }
    }
}
My idea was that when this happens, the program should skip that data. I tried different buffer lengths, but anything more or less than 5 always receives incorrect data.
Thanks for any help.
Two methods can fix your problem.
Method 1: Make your UART communication robust
The goal is to leave no chance of an error, but this is a tricky and time-consuming task when Windows software communicates with an Arduino. Still, there are some tricks that can help; I have tried them and some of them work, though please don't ask me why.
1. Change your UART baud rate from 9600 to something else (mine works perfectly at 38400).
2. Replace Serial.begin(9600); with Serial.begin(38400, SERIAL_8N1), where in SERIAL_8N1 the 8 is the data bits, N is no parity, and 1 is the stop bit. You can also try different data-bit, stop-bit, and parity settings. Check here.
3. Don't send the data after converting it into a String with Serial.println(digx);. Send it as a byte array using Serial.write(digx, len);. Check here.
Method 2: Add a CRC packet
Append a CRC to the end of each data packet and then send it. When it reaches the Windows software, first fetch all the UART data, then calculate the CRC from that data and compare the calculated CRC with the received one. If they match, the data is correct; if not, ignore the data and wait for new data.
CRC calculation is a direct formula-based implementation; for a little head start, check this library by vinmenn.
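As an illustration only (this is not from the answer above), a simple CRC-8 can be computed the same way on both sides; the sketch below assumes the Arduino sends raw digit bytes rather than a String, and the digit/packet names are hypothetical:
// Hypothetical CRC-8 (polynomial 0x07), MSB first, no reflection, init 0x00.
uint8_t crc8(const uint8_t *data, size_t len) {
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (uint8_t bit = 0; bit < 8; bit++) {
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07) : (uint8_t)(crc << 1);
        }
    }
    return crc;
}

// Example: send three digit bytes followed by their CRC; the Windows side
// recomputes crc8 over the first three bytes and compares it with the fourth.
// uint8_t packet[4] = { hundreds, tens, ones, 0 };
// packet[3] = crc8(packet, 3);
// Serial.write(packet, 4);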
I used two things to fix the problem:
1. In the Windows code, I added the SerialDevice.IsDataTerminalReadyEnabled = true property so that the bytes are read in the correct order. From the Microsoft docs:
Here's a link!
2. As dharmik suggested, I build an array on the Arduino and send it with Serial.write(array, arrayLength), then read it with dataReader.ReadBytes(array) and the right buffer size in the Windows app.
And the result is very stable, even with much lower delays.
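For reference, here is a sketch of what the byte-array version of the Arduino loop() might look like (my own illustration, assuming three digit bytes per reading, not the poster's exact code):
void loop() {
    int sensorValue = analogRead(A1);
    sensorValue = map(sensorValue, 0, 1024, 0, 999);
    // Send the three digits as raw bytes instead of an ASCII String.
    uint8_t packet[3] = {
        (uint8_t)((sensorValue / 100) % 10),   // hundreds
        (uint8_t)((sensorValue / 10) % 10),    // tens
        (uint8_t)(sensorValue % 10)            // ones
    };
    Serial.write(packet, sizeof(packet));      // matched by dataReader.ReadBytes(...) with a 3-byte buffer
    delay(100);
}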

Raw Input mouse lastx, lasty with odd values while logged in through RDP

When I attempt to update my mouse position from the lLastX, and lLastY members of the RAWMOUSE structure while I'm logged in via RDP, I get some really odd numbers (like > 30,000 for both). I've noticed this behavior on Windows 7, 8, 8.1 and 10.
The usFlags member returns a value of MOUSE_MOVE_ABSOLUTE | MOUSE_VIRTUAL_DESKTOP. Regarding the MOUSE_MOVE_ABSOLUTE, I am handling absolute positioning as well as relative in my code. However, the virtual desktop flag has me a bit confused as I assumed that flag was for a multi-monitor setup. I've got a feeling that there's a connection to that flag and the weird numbers I'm getting. Unfortunately, I really don't know how to adjust the values without a point of reference, nor do I even know how to get a point of reference.
When I run my code locally, everything works as it should.
So does anyone have any idea why RDP + Raw Input would give me such messed up mouse lastx/lasty values? And if so, is there a way I can convert them to more sensible values?
It appears that when using WM_INPUT through Remote Desktop, the MOUSE_MOVE_ABSOLUTE and MOUSE_VIRTUAL_DESKTOP bits are set, and the values seem to range from 0 to USHRT_MAX.
I never found clear documentation stating which coordinate system is used when the MOUSE_VIRTUAL_DESKTOP bit is set, but the following has worked well so far:
case WM_INPUT: {
    // A RAWINPUT struct for a mouse fits in 48 bytes on x64 (40 on x86).
    UINT buffer_size = 48;
    BYTE buffer[48];
    GetRawInputData((HRAWINPUT)lparam, RID_INPUT, buffer, &buffer_size, sizeof(RAWINPUTHEADER));
    RAWINPUT* raw = (RAWINPUT*)buffer;
    if (raw->header.dwType != RIM_TYPEMOUSE) {
        break;
    }
    const RAWMOUSE& mouse = raw->data.mouse;
    if ((mouse.usFlags & MOUSE_MOVE_ABSOLUTE) == MOUSE_MOVE_ABSOLUTE) {
        static Vector3 last_pos = vector3(FLT_MAX, FLT_MAX, FLT_MAX);
        const bool virtual_desktop = (mouse.usFlags & MOUSE_VIRTUAL_DESKTOP) == MOUSE_VIRTUAL_DESKTOP;
        const int width = GetSystemMetrics(virtual_desktop ? SM_CXVIRTUALSCREEN : SM_CXSCREEN);
        const int height = GetSystemMetrics(virtual_desktop ? SM_CYVIRTUALSCREEN : SM_CYSCREEN);
        // Absolute coordinates are normalized to [0, USHRT_MAX]; scale to screen pixels.
        const Vector3 absolute_pos = vector3((mouse.lLastX / float(USHRT_MAX)) * width,
                                             (mouse.lLastY / float(USHRT_MAX)) * height, 0);
        if (last_pos != vector3(FLT_MAX, FLT_MAX, FLT_MAX)) {
            MouseMoveEvent(absolute_pos - last_pos);
        }
        last_pos = absolute_pos;
    }
    else {
        // Relative movement: lLastX/lLastY are already deltas.
        MouseMoveEvent(vector3((float)mouse.lLastX, (float)mouse.lLastY, 0));
    }
}
break;
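For completeness, the handler above assumes the window has already registered for mouse raw input; a minimal registration sketch (my assumption, not part of the answer; hwnd stands for whatever window handles WM_INPUT) looks like this:
// Register the window for WM_INPUT messages from the mouse.
RAWINPUTDEVICE rid = {};
rid.usUsagePage = 0x01;   // HID_USAGE_PAGE_GENERIC
rid.usUsage     = 0x02;   // HID_USAGE_GENERIC_MOUSE
rid.dwFlags     = 0;      // deliver input only while the window is in the foreground
rid.hwndTarget  = hwnd;   // hypothetical window handle
RegisterRawInputDevices(&rid, 1, sizeof(rid));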

How to get array of float audio data from AudioQueueRef in iOS?

I'm working on getting audio into the iPhone in a form where I can pass it to a (C++) analysis algorithm. There are, of course, many options: the AudioQueue tutorial at trailsinthesand gets things started.
The audio callback, though, gives an AudioQueueRef, and I'm finding Apple's documentation thin on this side of things. There are built-in methods to write to a file, but nothing that lets you actually peer inside the packets to see the data.
I need data. I don't want to write anything to a file, which is what all the tutorials — and even Apple's convenience I/O objects — seem to be aiming at. Apple's AVAudioRecorder (infuriatingly) will give you levels and write the data, but not actually give you access to it. Unless I'm missing something...
How do I do this? In the code below there is inBuffer->mAudioData, which is tantalizingly close, but I can find no information about what format this 'data' is in or how to access it.
AudioQueue Callback:
void AudioInputCallback(void *inUserData,
                        AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime,
                        UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    static int count = 0;
    RecordState* recordState = (RecordState*)inUserData;
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
    ++count;
    printf("Got buffer %d\n", count);
}
And the code to write the audio to a file:
OSStatus status = AudioFileWritePackets(recordState->audioFile,
                                        false,
                                        inBuffer->mAudioDataByteSize,
                                        inPacketDescs,
                                        recordState->currentPacket,
                                        &inNumberPacketDescriptions,
                                        inBuffer->mAudioData); // THIS! This is what I want to look inside of.
if (status == 0)
{
    recordState->currentPacket += inNumberPacketDescriptions;
}
// so you don't have to hunt them all down when you decide to switch to float:
#define AUDIO_DATA_TYPE_FORMAT SInt16

// the actual sample-grabbing code
// (use inBuffer->mAudioDataByteSize instead of mAudioDataBytesCapacity if you only want the valid portion):
int sampleCount = inBuffer->mAudioDataBytesCapacity / sizeof(AUDIO_DATA_TYPE_FORMAT);
AUDIO_DATA_TYPE_FORMAT *samples = (AUDIO_DATA_TYPE_FORMAT*)inBuffer->mAudioData;
Then you have your (in this case SInt16) array of samples, which you can access from samples[0] to samples[sampleCount - 1].
The above solution did not work for me; I was getting the wrong sample data (an endianness issue). In case someone else gets wrong sample data in the future, I hope this helps:
- (void)feedSamplesToEngine:(UInt32)audioDataBytesCapacity audioData:(void *)audioData {
    int sampleCount = audioDataBytesCapacity / sizeof(SAMPLE_TYPE);
    SAMPLE_TYPE *samples = (SAMPLE_TYPE *)audioData;
    //SAMPLE_TYPE *sample_le = (SAMPLE_TYPE *)malloc(sizeof(SAMPLE_TYPE) * sampleCount); // for swapping endians
    std::string shorts;
    double power = pow(2, 10);
    for (int i = 0; i < sampleCount; i++)
    {
        SAMPLE_TYPE sample_le = (0xff00 & (samples[i] << 8)) | (0x00ff & (samples[i] >> 8)); // endianness issue
        char dataInterim[30];
        sprintf(dataInterim, "%f ", sample_le / power); // normalize it
        shorts.append(dataInterim);
    }
}
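If what you ultimately want is a plain float array for the C++ analysis code rather than a string, a minimal sketch (assuming 16-bit signed, host-endian samples as in the first answer; the helper name is hypothetical) is to normalise each sample by 32768:
// Hypothetical helper: convert interleaved SInt16 samples to floats in [-1.0, 1.0).
void ConvertToFloat(const SInt16 *inSamples, int inCount, float *outSamples)
{
    for (int i = 0; i < inCount; ++i)
        outSamples[i] = inSamples[i] / 32768.0f;
}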

How to define end in objective C

OSStatus SetupBuffers(BG_FileInfo *inFileInfo)
{
    int numBuffersToQueue = kNumberBuffers;
    UInt32 maxPacketSize;
    UInt32 size = sizeof(maxPacketSize);
    // we need to calculate how many packets we read at a time, and how big a buffer we need
    // we base this on the size of the packets in the file and an approximate duration for each buffer
    // first check to see what the max size of a packet is - if it is bigger
    // than our allocation default size, that needs to become larger
    OSStatus result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize);
    AssertNoError("Error getting packet upper bound size", end);
    bool isFormatVBR = (inFileInfo->mFileFormat.mBytesPerPacket == 0 || inFileInfo->mFileFormat.mFramesPerPacket == 0);
    CalculateBytesForTime(inFileInfo->mFileFormat, maxPacketSize, 0.5/*seconds*/, &mBufferByteSize, &mNumPacketsToRead);
    // if the file is smaller than the capacity of all the buffer queues, always load it at once
    if ((mBufferByteSize * numBuffersToQueue) > inFileInfo->mFileDataSize)
        inFileInfo->mLoadAtOnce = true;
    if (inFileInfo->mLoadAtOnce)
    {
        UInt64 theFileNumPackets;
        size = sizeof(UInt64);
        result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyAudioDataPacketCount, &size, &theFileNumPackets);
        AssertNoError("Error getting packet count for file", end);   // ***>>>> this is where Xcode says undefined <<<<***
        mNumPacketsToRead = (UInt32)theFileNumPackets;
        mBufferByteSize = inFileInfo->mFileDataSize;
        numBuffersToQueue = 1;
    }
//Here is the exact error
label 'end' used but not defined
I get that error twice.
If you look at the SoundEngine.cpp source that the snippet comes from, you'll see it's defined on the very next line:
end:
return result;
It's a label that execution jumps to when there's an error.
Uhm, the only place I can find AssertNoError is in Technical Note TN2113, and there it has a completely different format: AssertNoError(theError, "couldn't unregister the ABL");. Where is AssertNoError defined?
User @Jeremy P mentions this document as well.
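For what it's worth, macros like this usually just wrap a check-and-goto; a hypothetical definition consistent with the snippet above (not necessarily the exact one in SoundEngine.cpp) would be:
// Hypothetical: log the error message and jump to the caller's error label.
#define AssertNoError(inMessage, inHandler)                  \
    if (result != noErr) {                                   \
        printf("%s: error %d\n", inMessage, (int)result);    \
        goto inHandler;                                      \
    }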