Problems connecting to the input pins of GMFBridge Sink Filter - c++-cli

I am experiencing a strange problem when trying to use the GMFBridge filter with the output of an Euresys UxH264 card.
I am trying to integrate this card into our solution, which relies on GMFBridge to support continuous capture to multiple files, performing muxing and file-splitting without having to stop the capture graph.
This card captures video and audio from analog inputs. It provides a DirectShow filter exposing both a raw stream of the video input and a hardware-encoded H.264 stream. The audio stream is provided as an uncompressed stream only.
When I attempt to directly connect any of the output pins of the Euresys source filters to the input pins of the GMFBridge Sink, they get rejected with the code VFW_E_NO_ALLOCATOR. (In the past I have successfully connected both H.264 and raw audio streams to the bridge.)
Grasping at straws, I plugged a pair of SampleGrabber filters in between the Euresys card filters and the bridge sink filter, and, just like that, the connections between the sample grabbers and the sink were accepted.
However, I am not getting any packets on the other side of the bridge (the muxing graph). I inspected the running capture graph with GraphStudioNext and somehow the sample grabbers appear detached from my graph, even though I got successful confirmations when I connected them to the source filter!
Here's the source code creating the graph.
void EuresysSourceBox::BuildGraph() {
    HRESULT hRes;
    CComPtr<IGraphBuilder> pGraph;
    COM_CALL(pGraph.CoCreateInstance(CLSID_FilterGraph));
#ifdef REGISTER_IN_ROT
    _rotEntry1 = FilterTools::RegisterGraphInROT(IntPtr(pGraph), "euresys graph");
#endif
    // [*Video Source*]
    String^ filterName = "Ux H.264 Visual Source";
    Guid category = _GUIDToGuid((GUID)AM_KSCATEGORY_CAPTURE);
    FilterMonikerList^ videoSourceList = FilterTools::GetFilterMonikersByName(category, filterName);
    CComPtr<IBaseFilter> pVideoSource;
    int monikerIndex = config->BoardIndex; // a filter instance will be retrieved for every installed board
    clr_scoped_ptr<CComPtr<IMoniker>>^ ppVideoSourceMoniker = videoSourceList[monikerIndex];
    COM_CALL((*ppVideoSourceMoniker->get())->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pVideoSource));
    COM_CALL(pGraph->AddFilter(pVideoSource, L"VideoSource"));
    // [Video Source]
    //
    // [*Audio Source*]
    filterName = "Ux H.264 Audio Encoder";
    FilterMonikerList^ audioSourceList = FilterTools::GetFilterMonikersByName(category, filterName);
    CComPtr<IBaseFilter> pAudioSource;
    clr_scoped_ptr<CComPtr<IMoniker>>^ ppAudioSourceMoniker = audioSourceList[monikerIndex];
    COM_CALL((*ppAudioSourceMoniker->get())->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pAudioSource));
    COM_CALL(pGraph->AddFilter(pAudioSource, L"AudioSource"));
    CComPtr<IPin> pVideoCompressedOutPin(FilterTools::GetPin(pVideoSource, "Encoded"));
    CComPtr<IPin> pAudioOutPin(FilterTools::GetPin(pAudioSource, "Audio"));
    CComPtr<IBaseFilter> pSampleGrabber;
    COM_CALL(pSampleGrabber.CoCreateInstance(CLSID_SampleGrabber));
    COM_CALL(pGraph->AddFilter(pSampleGrabber, L"SampleGrabber"));
    CComPtr<IPin> pSampleGrabberInPin(FilterTools::GetPin(pSampleGrabber, "Input"));
    COM_CALL(pGraph->ConnectDirect(pVideoCompressedOutPin, pSampleGrabberInPin, NULL)); // DOES NOT FAIL!!
    CComPtr<IBaseFilter> pSampleGrabber2;
    COM_CALL(pSampleGrabber2.CoCreateInstance(CLSID_SampleGrabber));
    COM_CALL(pGraph->AddFilter(pSampleGrabber2, L"SampleGrabber2"));
    CComPtr<IPin> pSampleGrabber2InPin(FilterTools::GetPin(pSampleGrabber2, "Input"));
    COM_CALL(pGraph->ConnectDirect(pAudioOutPin, pSampleGrabber2InPin, NULL)); // DOES NOT FAIL!!
    // [Video Source]---
    //                  |-->[*Bridge Sink*]
    // [Audio Source]---
    CComPtr<IPin> pSampleGrabberOutPin(FilterTools::GetPin(pSampleGrabber, "Output"));
    CComPtr<IPin> pSampleGrabber2OutPin(FilterTools::GetPin(pSampleGrabber2, "Output"));
    CreateGraphBridge(
        IntPtr(pGraph),
        IntPtr(pSampleGrabberOutPin),
        IntPtr(pSampleGrabber2OutPin)
    );
    // Root graph to parent object
    _ppCaptureGraph.reset(new CComPtr<IGraphBuilder>(pGraph));
}
COM_CALL is my HRESULT-checking macro; it raises a managed exception if the result is anything other than S_OK. So the connections between the pins succeeded, but here is how disjointed the graph looks while it is running:
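For reference, a minimal sketch of what such a macro might look like in C++/CLI (the real COM_CALL is my own helper; throwing COMException is just one reasonable choice):

// Sketch: evaluate a COM call and raise a managed exception on failure.
// Assumes an HRESULT named hRes is in scope, as in the code above.
#define COM_CALL(expr) \
    do { \
        hRes = (expr); \
        if (hRes != S_OK) \
            throw gcnew System::Runtime::InteropServices::COMException(#expr, hRes); \
    } while (false)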
So, I have three questions:
1) What could VFW_E_NO_ALLOCATOR mean in this instance? (The source filter can be successfully connected to other components, such as the LAV Video decoder or the ffdshow video decoder.)
2) Is there a known workaround to circumvent the VFW_E_NO_ALLOCATOR problem?
3) Is it possible that a filter gets disconnected at runtime as it seems to be happening in my case?

I found a reference by Geraint Davies explaining why the GMFBridge sink filter may be rejecting the connection:

It looks as though the parser is insisting on using its own allocator -- this is common with parsers where the output samples are merely pointers into the input samples. However, the bridge cannot implement suspend mode without using its own allocator, so a copy is required.

With this information, I decided to create an ultra-simple CTransformFilter that simply accepts the allocator offered by the bridge and copies whatever arrives on the input sample to the output sample. I am not 100% sure that what I did is right, but it is working now: I can successfully plug the Euresys card into our capture infrastructure.
For reference, if anyone experiences something similar, here is the code of the filter I created:
class SampleCopyGeneratorFilter : public CTransformFilter {
protected:
    HRESULT CheckInputType(const CMediaType* mtIn) override { return S_OK; }
    HRESULT GetMediaType(int iPosition, CMediaType* pMediaType) override;
    HRESULT CheckTransform(const CMediaType *mtIn, const CMediaType *mtOut) override { return S_OK; }
    HRESULT DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp) override;
    HRESULT Transform(IMediaSample *pSource, IMediaSample *pDest) override;
public:
    SampleCopyGeneratorFilter();
};
//--------------------------------------------------------------------------------------------------------------------
SampleCopyGeneratorFilter::SampleCopyGeneratorFilter()
    : CTransformFilter(NAME("SampleCopyGeneratorFilter"), NULL, GUID_NULL)
{
}

HRESULT SampleCopyGeneratorFilter::GetMediaType(int iPosition, CMediaType* pMediaType) {
    HRESULT hRes;
    ASSERT(m_pInput->IsConnected());
    if( iPosition < 0 )
        return E_INVALIDARG;
    CComPtr<IPin> connectedTo;
    COM_CALL(m_pInput->ConnectedTo(&connectedTo));
    CComPtr<IEnumMediaTypes> pMTEnumerator;
    COM_CALL(connectedTo->EnumMediaTypes(&pMTEnumerator));
    AM_MEDIA_TYPE* pIteratedMediaType;
    for( int i = 0; i <= iPosition; i++ ) {
        if( pMTEnumerator->Next(1, &pIteratedMediaType, NULL) != S_OK )
            return VFW_S_NO_MORE_ITEMS;
        if( i == iPosition )
            *pMediaType = *pIteratedMediaType;
        DeleteMediaType(pIteratedMediaType);
    }
    return S_OK;
}

HRESULT SampleCopyGeneratorFilter::DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp) {
    HRESULT hRes;
    AM_MEDIA_TYPE mt;
    COM_CALL(m_pInput->ConnectionMediaType(&mt));
    try {
        BITMAPINFOHEADER* pBMI = HEADER(mt.pbFormat);
        pProp->cbBuffer = DIBSIZE(*pBMI); // format is compressed, uncompressed size should be enough
        if( !pProp->cbAlign )
            pProp->cbAlign = 1;
        pProp->cbPrefix = 0;
        pProp->cBuffers = 4;
        ALLOCATOR_PROPERTIES actualProperties;
        COM_CALL(pAlloc->SetProperties(pProp, &actualProperties));
        if( pProp->cbBuffer > actualProperties.cbBuffer )
            return E_FAIL;
        return S_OK;
    } finally {
        FreeMediaType(mt);
    }
}

HRESULT SampleCopyGeneratorFilter::Transform(IMediaSample *pSource, IMediaSample *pDest) {
    HRESULT hRes;
    BYTE* pBufferIn;
    BYTE* pBufferOut;
    long destSize = pDest->GetSize();
    long dataLen = pSource->GetActualDataLength();
    if( dataLen > destSize )
        return VFW_E_BUFFER_OVERFLOW;
    COM_CALL(pSource->GetPointer(&pBufferIn));
    COM_CALL(pDest->GetPointer(&pBufferOut));
    memcpy(pBufferOut, pBufferIn, dataLen);
    pDest->SetActualDataLength(dataLen);
    pDest->SetSyncPoint(pSource->IsSyncPoint() == S_OK);
    return S_OK;
}
Here is how I inserted the filter in the capture graph:
CComPtr<IPin> pAACEncoderOutPin(FilterTools::GetPin(pAACEncoder, "XForm Out"));
CComPtr<IPin> pVideoSourceCompressedOutPin(FilterTools::GetPin(pVideoSource, "Encoded"));
CComPtr<IBaseFilter> pSampleCopier;
pSampleCopier = new SampleCopyGeneratorFilter();
COM_CALL(pGraph->AddFilter(pSampleCopier, L"SampleCopier"));
CComPtr<IPin> pSampleCopierInPin(FilterTools::GetPin(pSampleCopier, "XForm In"));
COM_CALL(pGraph->ConnectDirect(pVideoSourceCompressedOutPin, pSampleCopierInPin, NULL));
CComPtr<IPin> pSampleCopierOutPin(FilterTools::GetPin(pSampleCopier, "XForm Out"));
CreateGraphBridge(
    IntPtr(pGraph),
    IntPtr(pSampleCopierOutPin),
    IntPtr(pAACEncoderOutPin)
);
Now, I still have no idea why inserting the sample grabbers did not work and resulted in detached graphs. I corroborated this quirk by examining the graphs with Graphedit Plus as well. If anyone can offer an explanation, I would be very grateful indeed.

Related

How do I create an Inter App MIDI In port

I want to implement an inter-app MIDI In port in my arranger app that other MIDI apps can access. I would very much appreciate some sample code. I built a virtual MIDI In port like this, but how do I make it visible to other apps?
MIDIClientRef virtualMidi;
result = MIDIClientCreate(CFSTR("Virtual Client"), MyMIDINotifyProc, NULL, &virtualMidi);
You need to use MIDIDestinationCreate, which will be visible to other MIDI clients. You need to provide a MIDIReadProc callback that will be notified when a MIDI event arrives at your MIDI Destination. You may also create another MIDI Input Port with the same callback, which you can connect from within your own program to an external MIDI Source.
Here is an example (in C++):
void internalCreate(CFStringRef name)
{
    OSStatus result = noErr;
    result = MIDIClientCreate( name, nullptr, nullptr, &m_client );
    if (result != noErr) {
        qDebug() << "MIDIClientCreate() err:" << result;
        return;
    }
    result = MIDIDestinationCreate( m_client, name, MacMIDIReadProc, (void*) this, &m_endpoint );
    if (result != noErr) {
        qDebug() << "MIDIDestinationCreate() err:" << result;
        return;
    }
    result = MIDIInputPortCreate( m_client, name, MacMIDIReadProc, (void *) this, &m_port );
    if (result != noErr) {
        qDebug() << "MIDIInputPortCreate() error:" << result;
        return;
    }
}
Another example, in Objective-C, from symplesynth:
- (id)initWithName:(NSString*)newName
{
    PYMIDIManager* manager = [PYMIDIManager sharedInstance];
    MIDIEndpointRef newEndpoint;
    OSStatus error;
    SInt32 newUniqueID;
    // This makes sure that we don't get notified about this endpoint until after
    // we're done creating it.
    [manager disableNotifications];
    MIDIDestinationCreate([manager midiClientRef], (CFStringRef)newName, midiReadProc, self, &newEndpoint);
    // This code works around a bug in OS X 10.1 that causes
    // new sources/destinations to be created without unique IDs.
    error = MIDIObjectGetIntegerProperty(newEndpoint, kMIDIPropertyUniqueID, &newUniqueID);
    if (error == kMIDIUnknownProperty) {
        newUniqueID = PYMIDIAllocateUniqueID();
        MIDIObjectSetIntegerProperty(newEndpoint, kMIDIPropertyUniqueID, newUniqueID);
    }
    MIDIObjectSetIntegerProperty(newEndpoint, CFSTR("PYMIDIOwnerPID"), [[NSProcessInfo processInfo] processIdentifier]);
    [manager enableNotifications];
    self = [super initWithMIDIEndpointRef:newEndpoint];
    ioIsRunning = NO;
    return self;
}
Ports can't be discovered from the API, but sources and destinations can. You want to create a MIDISource or MIDIDestination so that MIDI clients can call MIDIGetNumberOfDestinations/MIDIGetDestination or MIDIGetNumberOfSources/MIDIGetSource and discover it.
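For illustration, here is a minimal sketch of how another client would then discover and inspect your destination (plain CoreMIDI; error handling omitted):

// Sketch: list every MIDI destination currently visible to this process.
ItemCount n = MIDIGetNumberOfDestinations();
for (ItemCount i = 0; i < n; ++i) {
    MIDIEndpointRef dest = MIDIGetDestination(i);
    CFStringRef name = NULL;
    if (MIDIObjectGetStringProperty(dest, kMIDIPropertyDisplayName, &name) == noErr) {
        CFShow(name); // print the endpoint's display name
        CFRelease(name);
    }
}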
FYI, there is no need to do what you are planning on macOS, because the IAC driver already provides it. If this is for iOS, these are the steps to follow (see the sketch after this list):
1) Create at least one MIDI Client.
2) Create a MIDIInputPort with a read block for I/O.
3) Use MIDIPortConnectSource to attach the input port to every MIDI Source of interest. From then on, every MIDI message received by a connected source will arrive at your read block.
4) If you want to resend this data to a different destination, you will need to have created a MIDIOutputPort as well; use MIDISend with that port to the desired MIDI Destination.
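A condensed sketch of those steps (MyReadProc here is a hypothetical MIDIReadProc callback; a read block via MIDIInputPortCreateWithBlock works the same way):

// Sketch: one client plus one input port, connected to every available source.
MIDIClientRef client;
MIDIClientCreate(CFSTR("MyClient"), NULL, NULL, &client);
MIDIPortRef inPort;
MIDIInputPortCreate(client, CFSTR("MyInput"), MyReadProc, NULL, &inPort);
ItemCount n = MIDIGetNumberOfSources();
for (ItemCount i = 0; i < n; ++i)
    MIDIPortConnectSource(inPort, MIDIGetSource(i), NULL); // every source of interest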

How to put BG96 on power save mode between sending messages to Azure IoT Hub over HTTP

I'm using a Nucleo L496ZG, X-NUCLEO-IKS01A2 and the Quectel BG96 module to send sensor data (temperature, humidity etc..) to Azure IoT Central over HTTP.
I've been using the example implementation provided by Avnet here, which works fine, but it's not power-optimized: with a 6700 mAh battery pack it only lasts around 30 hours sending telemetry every ~10 seconds. The goal is for it to last around a week. I'm open to increasing the time between messages, but I also want to save power in between sends.
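(For scale: 6700 mAh over 30 hours is an average draw of roughly 220 mA, so lasting a week, ~168 hours, means getting the average down to about 40 mA. The current drawn between messages dominates that budget.)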
I've gone over the Quectel BG96 manuals and I've tried two things:
1) powering off the device by driving the PWRKEY and turning it back on when I need to send a message
I've gotten this to work, kinda… until I get a hardfault exception, which happens seemingly at random, anywhere from ~5 minutes after starting to 2 hours (messages send successfully prior to the exception). The output of the crash log parser is the same every time:
Crash location = strncmp [0x08038DF8] (based on PC value)
Caller location = _findenv_r [0x0804119D] (based on LR value)
Stack Pointer at the time of crash = [20008128]
Target and Fault Info:
Processor Arch: ARM-V7M or above
Processor Variant: C24
Forced exception, a fault with configurable priority has been escalated to HardFault
A precise data access error has occurred. Faulting address: 03060B30
The caller location traces back to my .map file and I don't know what to make of it.
My code:
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.

//#define USE_MQTT

#include <stdlib.h>
#include "mbed.h"
#include "iothubtransporthttp.h"
#include "iothub_client_core_common.h"
#include "iothub_client_ll.h"
#include "azure_c_shared_utility/platform.h"
#include "azure_c_shared_utility/agenttime.h"
#include "jsondecoder.h"
#include "bg96gps.hpp"
#include "azure_message_helper.h"

#define IOT_AGENT_OK CODEFIRST_OK

#include "azure_certs.h"

/* initialize the expansion board && sensors */
#include "XNucleoIKS01A2.h"
static HTS221Sensor *hum_temp;
static LSM6DSLSensor *acc_gyro;
static LPS22HBSensor *pressure;

static const char* connectionString = "xxx";

// to report F uncomment this #define CTOF(x) (((double)(x)*9/5)+32)
#define CTOF(x) (x)

Thread azure_client_thread(osPriorityNormal, 10*1024, NULL, "azure_client_thread");
static void azure_task(void);
EventFlags deleteOK;
size_t g_message_count_send_confirmations;

/* create the GPS elements for example program */
BG96Interface* bg96Interface;

//static int tilt_event;

// void mems_int1(void)
// {
//     tilt_event++;
// }

void mems_init(void)
{
    //acc_gyro->attach_int1_irq(&mems_int1); // Attach callback to LSM6DSL INT1
    hum_temp->enable();                      // Enable HTS221 enviromental sensor
    pressure->enable();                      // Enable barametric pressure sensor
    acc_gyro->enable_x();                    // Enable LSM6DSL accelerometer
    //acc_gyro->enable_tilt_detection();     // Enable Tilt Detection
}

void powerUp(void) {
    if (platform_init() != 0) {
        printf("Error initializing the platform\r\n");
        return;
    }
    bg96Interface = (BG96Interface*) easy_get_netif(true);
}

void BG96_Modem_PowerOFF(void)
{
    DigitalOut BG96_RESET(D7);
    DigitalOut BG96_PWRKEY(D10);
    DigitalOut BG97_WAKE(D11);
    BG96_RESET = 0;
    BG96_PWRKEY = 0;
    BG97_WAKE = 0;
    wait_ms(300);
}

void powerDown(){
    platform_deinit();
    BG96_Modem_PowerOFF();
}

//
// The main routine simply prints a banner, initializes the system
// starts the worker threads and waits for a termination (join)
int main(void)
{
    //printStartMessage();
    XNucleoIKS01A2 *mems_expansion_board = XNucleoIKS01A2::instance(I2C_SDA, I2C_SCL, D4, D5);
    hum_temp = mems_expansion_board->ht_sensor;
    acc_gyro = mems_expansion_board->acc_gyro;
    pressure = mems_expansion_board->pt_sensor;

    azure_client_thread.start(azure_task);
    azure_client_thread.join();
    platform_deinit();
    printf(" - - - - - - - ALL DONE - - - - - - - \n");
    return 0;
}

static void send_confirm_callback(IOTHUB_CLIENT_CONFIRMATION_RESULT result, void* userContextCallback)
{
    //userContextCallback;
    // When a message is sent this callback will get envoked
    g_message_count_send_confirmations++;
    deleteOK.set(0x1);
}

void sendMessage(IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle, char* buffer, size_t size)
{
    IOTHUB_MESSAGE_HANDLE messageHandle = IoTHubMessage_CreateFromByteArray((const unsigned char*)buffer, size);
    if (messageHandle == NULL) {
        printf("unable to create a new IoTHubMessage\r\n");
        return;
    }
    if (IoTHubClient_LL_SendEventAsync(iotHubClientHandle, messageHandle, send_confirm_callback, NULL) != IOTHUB_CLIENT_OK)
        printf("FAILED to send! [RSSI=%d]\n", platform_RSSI());
    else
        printf("OK. [RSSI=%d]\n", platform_RSSI());
    IoTHubMessage_Destroy(messageHandle);
}

void azure_task(void)
{
    //bool tilt_detection_enabled=true;
    float gtemp, ghumid, gpress;
    int k;
    int msg_sent=1;

    while (true) {
        powerUp();
        mems_init();

        /* Setup IoTHub client configuration */
        IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle = IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
        if (iotHubClientHandle == NULL) {
            printf("Failed on IoTHubClient_Create\r\n");
            return;
        }

        // add the certificate information
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "TrustedCerts", certificates) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"TrustedCerts\"\r\n");

#if MBED_CONF_APP_TELUSKIT == 1
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "product_info", "TELUSIOTKIT") != IOTHUB_CLIENT_OK)
            printf("failure to set option \"product_info\"\r\n");
#endif

        // polls will happen effectively at ~10 seconds. The default value of minimumPollingTime is 25 minutes.
        // For more information, see:
        // https://azure.microsoft.com/documentation/articles/iot-hub-devguide/#messaging
        unsigned int minimumPollingTime = 9;
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "MinimumPollingTime", &minimumPollingTime) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"MinimumPollingTime\"\r\n");

        IoTDevice* iotDev = (IoTDevice*)malloc(sizeof(IoTDevice));
        if (iotDev == NULL) {
            return;
        }
        setUpIotStruct(iotDev);

        char* msg;
        size_t msgSize;
        hum_temp->get_temperature(&gtemp); // get Temp
        hum_temp->get_humidity(&ghumid);   // get Humidity
        pressure->get_pressure(&gpress);   // get pressure
        iotDev->Temperature = CTOF(gtemp);
        iotDev->Humidity = (int)ghumid;
        iotDev->Pressure = (int)gpress;
        printf("(%04d)",msg_sent++);
        msg = makeMessage(iotDev);
        msgSize = strlen(msg);
        sendMessage(iotHubClientHandle, msg, msgSize);
        free(msg);
        iotDev->Tilt &= 0x2;

        /* schedule IoTHubClient to send events/receive commands */
        IOTHUB_CLIENT_STATUS status;
        while ((IoTHubClient_LL_GetSendStatus(iotHubClientHandle, &status) == IOTHUB_CLIENT_OK) && (status == IOTHUB_CLIENT_SEND_STATUS_BUSY))
        {
            IoTHubClient_LL_DoWork(iotHubClientHandle);
            ThisThread::sleep_for(100);
        }
        deleteOK.wait_all(0x1);
        free(iotDev);
        IoTHubClient_LL_Destroy(iotHubClientHandle);
        powerDown();
        ThisThread::sleep_for(300000);
    }
    return;
}
I know PSM is probably the way to go, since powering the device on and off draws a lot of power, but it would be useful if someone had an idea of what is happening here.
2) putting the device to PSM between sending messages
The BG96 library in the example code I'm using doesn't have a method to turn on PSM, so I tried to implement my own. When I try to run it, it basically hits an exception right away, so I know it's wrong (I'm very new to embedded development and have no prior experience with AT commands).
/** ----------------------------------------------------------
* this is a method provided by the current library
* @brief Tx a string to the BG96 and wait for an OK response
* @param none
* @retval true if OK received, false otherwise
*/
bool BG96::tx2bg96(char* cmd) {
    bool ok=false;
    _bg96_mutex.lock();
    ok=_parser.send(cmd) && _parser.recv("OK");
    _bg96_mutex.unlock();
    return ok;
}

/**
* method I created in an attempt to use PSM
*/
bool BG96::psm(void) {
    return tx2bg96((char*)"AT+CPSMS=1,,,”00000100”,”00000001”");
}
Can someone tell me what I'm doing wrong and provide any guidance on how I can achieve my goal of having my device run on battery for longer?
Thank you!!
I got Power Saving Mode working by using Mbed's ATCmdParser and the AT+QPSMS command, as per Quectel's docs. Note that the modem doesn't always enter power saving mode right away. I also found that I have to restart the modem afterwards or else I get weird behaviour. My code looks something like this:
bool BG96::psm(char* T3412, char* T3324) {
    _bg96_mutex.lock();
    if (_parser.send("AT+QPSMS=1,,,\"%s\",\"%s\"", T3412, T3324) && _parser.recv("OK")) {
        _bg96_mutex.unlock();
    } else {
        _bg96_mutex.unlock();
        return false;
    }
    return BG96Ready(); // restarts the modem
}
To send a message to Azure, the modem has to be manually woken up by driving the PWRKEY to start bi-directional communication, and a new client handle needs to be created and torn down every time, since the Azure connection uses keepAlive and the modem is unreachable while it's in PSM.
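Putting it together, one telemetry cycle then looks roughly like this (a sketch only; powerUp() is the helper from the question, and the Azure calls are the ones already used above):

// Sketch of one telemetry cycle once PSM is active on the modem.
while (true) {
    powerUp();                                  // pulse PWRKEY: wake the modem from PSM
    IOTHUB_CLIENT_LL_HANDLE h =
        IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
    // ... set options, build and send the message,
    //     pump IoTHubClient_LL_DoWork() until the send is confirmed ...
    IoTHubClient_LL_Destroy(h);                 // fresh handle every cycle; keepAlive
                                                // cannot span the modem's PSM sleep
    ThisThread::sleep_for(300000);              // modem re-enters PSM (T3324) meanwhile
}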

Linux-Xenomai Serial Communication using xeno_16550A module

I'm a beginner with RTOS and I'm using Xenomai v2.6.3.
I'm trying to receive some data over serial communication.
I did my best on the task following Xenomai's guide and open-source examples, but it doesn't work well.
Here is the link to the guide: https://xenomai.org//serial-16550a-driver/
I followed the sequence to use the xeno_16550A module (with port io = 0x2f8 and irq = 3).
I also followed the open-source example at http://www.acadis.org/pages/captain.at/serial-port-example
It works well in the write task, but the read task doesn't work well.
It gives me the error message "error while RTSER_RTIOC_WAIT_EVENT, code -110" (which means the connection timed out).
Moreover, I checked IRQ number 3 by typing the command 'cat /proc/xenomai/irq', but the interrupt count doesn't increase.
In my case, I don't need to write data, so I erased the write task code.
The read task procedure is as follows:
void read_task_proc(void *arg) {
    int ret;
    ssize_t red = 0;
    struct rtser_event rx_event;

    while (1) {
        /* waiting for event */
        ret = rt_dev_ioctl(my_fd, RTSER_RTIOC_WAIT_EVENT, &rx_event);
        if (ret) {
            printf(RTASK_PREFIX "error while RTSER_RTIOC_WAIT_EVENT, code %d\n", ret);
            if (ret == -ETIMEDOUT)
                continue;
            break;
        }
        unsigned char buf[1];
        red = rt_dev_read(my_fd, &buf, 1);
        if (red < 0) {
            printf(RTASK_PREFIX "error while rt_dev_read, code %zd\n", red);
        } else {
            printf(RTASK_PREFIX "only %zd byte received, char: %c\n", red, buf[0]);
        }
    }

exit_read_task:
    if (my_state & STATE_FILE_OPENED) {
        if (!close_file(my_fd, READ_FILE " (rtser)")) {
            my_state &= ~STATE_FILE_OPENED;
        }
    }
    printf(RTASK_PREFIX "exit\n");
}
I can guess at two possible causes of the problem:
1) the buffer is the wrong size, or the buffer is already full when new data is received;
2) the RX interrupt doesn't fire.
I want to check whether either of these is the case, but how can I check?
Furthermore, does anybody know the cause of the problem? Please give me comments.
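For what it's worth, the event mask and timeouts are configured through RTSER_RTIOC_SET_CONFIG before waiting on events; whether that step is right is one thing to double-check. A minimal sketch of that setup (assuming the standard rtdm/rtserial.h API; the baud rate is an assumption):

#include <rtdm/rtserial.h>
/* Sketch: enable RX-pending events and bound the event wait. */
struct rtser_config config;
memset(&config, 0, sizeof(config));
config.config_mask   = RTSER_SET_BAUD | RTSER_SET_EVENT_MASK |
                       RTSER_SET_TIMEOUT_RX | RTSER_SET_TIMEOUT_EVENT;
config.baud_rate     = 115200;                 /* assumption: match your device */
config.event_mask    = RTSER_EVENT_RXPEND;     /* fire the event on received data */
config.rx_timeout    = RTSER_TIMEOUT_INFINITE;
config.event_timeout = 1000000000;             /* 1 s, in nanoseconds */
ret = rt_dev_ioctl(my_fd, RTSER_RTIOC_SET_CONFIG, &config);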

Using a Filter Audio Unit Effect in iOS5

I'm trying to use a remote IO connection and route the audio input through the built in filter effect (iOS 5 only) and then back out of the hardware. I can make it route straight from the input to the output but I can't get the filter to work. I'm not sure whether it's the filter Audio Unit or the routing that I've got wrong.
This bit is just my attempt at setting up the filter and changing the routing so that the data is processed by it.
Any help is appreciated.
// ******* BEGIN FILTER ********
NSLog(@"Begin filter");

// Creates Audio Component Description - Output Filter
AudioComponentDescription filterCompDesc;
filterCompDesc.componentType = kAudioUnitType_Effect;
filterCompDesc.componentSubType = kAudioUnitSubType_LowPassFilter;
filterCompDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
filterCompDesc.componentFlags = 1;
filterCompDesc.componentFlagsMask = 1;

// Create Filter Unit
AudioUnit lpFilterUnit;
AudioComponent filterComponent = AudioComponentFindNext(NULL, &filterCompDesc);
setupErr = AudioComponentInstanceNew(filterComponent, &lpFilterUnit);
NSAssert(setupErr == noErr, @"No instance of filter");

AudioUnitElement bus2 = 2;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitSubType_LowPassFilter, kAudioUnitScope_Output, bus2, &oneFlag, sizeof(oneFlag));
AudioUnitElement bus3 = 3;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitSubType_LowPassFilter, kAudioUnitScope_Input, bus3, &oneFlag, sizeof(oneFlag));
// ******** END FILTER ******** //

AudioUnitConnection hardInToLP;
hardInToLP.sourceAudioUnit = remoteIOunit;
hardInToLP.sourceOutputNumber = 1;
hardInToLP.destInputNumber = 3;
setupErr = AudioUnitSetProperty (
    remoteIOunit,                       // connection destination
    kAudioUnitProperty_MakeConnection,  // property key
    kAudioUnitScope_Input,              // destination scope
    bus3,                               // destination element
    &hardInToLP,                        // connection definition
    sizeof (hardInToLP)
);

AudioUnitConnection LPToHardOut;
LPToHardOut.sourceAudioUnit = lpFilterUnit;
LPToHardOut.sourceOutputNumber = 1;
LPToHardOut.destInputNumber = 3;
setupErr = AudioUnitSetProperty (
    remoteIOunit,                       // connection destination
    kAudioUnitProperty_MakeConnection,  // property key
    kAudioUnitScope_Input,              // destination scope
    bus3,                               // destination element
    &hardInToLP,                        // connection definition
    sizeof (hardInToLP)
);

/*
// Sets up the Audio Units Connection - new instance called connection
AudioUnitConnection connection;
// Connect Audio Input's out to Audio Out's in
connection.sourceAudioUnit = remoteIOunit;
connection.sourceOutputNumber = bus1;
connection.destInputNumber = bus0;
setupErr = AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, bus0, &connection, sizeof(connection));
*/
NSAssert(setupErr == noErr, @"No RIO connection");
A couple of things are going on here:
You're gonna help yourself a lot if you do an assert (or some sort of check-error-and-log-it) after every call that can return an OSStatus. That way you'll figure out how far you're getting. Probably also want to log the actual OSStatus value when it's != noErr, and then look it up (start in "Audio Unit Component Services Reference" in Xcode documentation viewer).
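For instance, a tiny helper along those lines (purely illustrative):

// Sketch: log any non-noErr status along with the call that produced it.
static void CheckStatus(OSStatus err, const char *operation) {
    if (err != noErr) {
        NSLog(@"%s failed: %d", operation, (int)err);
    }
}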
After you create the filter AudioUnit, I don't get what you're doing with the AudioUnitSetProperty() calls. The second parameter should be the name of a property (something that starts with kAudioUnitProperty...). That's almost certainly returning an error right there.
remoteIOunit only has two buses, and they have special meanings: bus 1 is input from the mic, bus 0 is output to the hardware. Trying to connect to remoteIO's input scope on bus 3 is probably going to be another error.
Suggest you roll back to when you had audio pass-through working. That would mean you had just remoteIO, and a connection from output scope / bus 1 to input scope / bus 0.
Then create the filter unit. Change your connections so you connect (see the sketch below):
remoteIO output scope bus 1 to filter input scope bus 0
filter output scope bus 0 to remoteIO input scope bus 0
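In code, that rewiring would look something like the following sketch (variable names follow the question; setupErr checks omitted after each call for brevity):

// remoteIO output scope / bus 1 (mic) -> filter input scope / bus 0
AudioUnitConnection micToFilter;
micToFilter.sourceAudioUnit    = remoteIOunit;
micToFilter.sourceOutputNumber = 1;
micToFilter.destInputNumber    = 0;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input, 0, &micToFilter, sizeof(micToFilter));

// filter output scope / bus 0 -> remoteIO input scope / bus 0 (speaker)
AudioUnitConnection filterToSpeaker;
filterToSpeaker.sourceAudioUnit    = lpFilterUnit;
filterToSpeaker.sourceOutputNumber = 0;
filterToSpeaker.destInputNumber    = 0;
setupErr = AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input, 0, &filterToSpeaker, sizeof(filterToSpeaker));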
The other thing that's going to be a problem is that all these iOS 5 filters seem to want floating-point LPCM formats, which is not the canonical format your other units will default to. You may have to get the stream format from the filter unit (input and output scope are probably the same) and then set that as the format that remoteIO's output scope / bus 1 produces and remoteIO's input scope / bus 0 accepts. Another option would be to introduce AUConverter units before and after the filter unit.
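A sketch of that format fix, again with the question's names (error checks omitted):

// Pull the filter's preferred (floating-point) format and impose it on
// remoteIO's mic output (output scope, bus 1) and speaker input (input scope, bus 0).
AudioStreamBasicDescription fmt;
UInt32 size = sizeof(fmt);
AudioUnitGetProperty(lpFilterUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &fmt, &size);
AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));
AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));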
The first answer given here just saved me a lot more frustration. Nowhere does the Apple documentation tell you that the stream formats for the Effect units require floating point. I couldn't figure out why it kept failing to play my audio properly until I read this post. I followed the advice above and retrieved the stream format from the low-pass filter unit, and used that to set up the two converter units that I created (i.e. set the output format of the pre-filter converter and the input format of the post-filter converter). Once I did that and connected all the nodes together, it started working as expected.
I'm trying to use a low-pass filter, and when trying to do as suggested (i.e. set the format) I keep getting the error "the operation could not be completed". What in this code is faulty?
After retrieving the lowpassUnit I also check for errors, but there are none.
result = AudioUnitSetProperty(lowpassUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &stereoStreamFormat, sizeof(stereoStreamFormat));
if (noErr != result)
{
    NSLog(@"%@", [NSError errorWithDomain:NSOSStatusErrorDomain code:result userInfo:nil]);
    return;
}
PS: If anyone knows of proper Audio Unit documentation, please share, as the official documentation is really lacking.

How to get array of float audio data from AudioQueueRef in iOS?

I'm working on getting audio into the iPhone in a form where I can pass it to a (C++) analysis algorithm. There are, of course, many options: the AudioQueue tutorial at trailsinthesand gets things started.
The audio callback, though, gives an AudioQueueRef, and I'm finding Apple's documentation thin on this side of things. There are built-in methods to write to a file, but nothing where you actually peer inside the packets to see the data.
I need the data. I don't want to write anything to a file, which is what all the tutorials — and even Apple's convenience I/O objects — seem to be aiming at. Apple's AVAudioRecorder (infuriatingly) will give you levels and write the data, but not actually give you access to it. Unless I'm missing something...
How do I do this? In the code below there is inBuffer->mAudioData, which is tantalizingly close, but I can find no information about what format this 'data' is in or how to access it.
AudioQueue Callback:
void AudioInputCallback(void *inUserData,
                        AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime,
                        UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    static int count = 0;
    RecordState* recordState = (RecordState*)inUserData;
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
    ++count;
    printf("Got buffer %d\n", count);
}
And the code to write the audio to a file:
OSStatus status = AudioFileWritePackets(recordState->audioFile,
                                        false,
                                        inBuffer->mAudioDataByteSize,
                                        inPacketDescs,
                                        recordState->currentPacket,
                                        &inNumberPacketDescriptions,
                                        inBuffer->mAudioData); // THIS! This is what I want to look inside of.
if (status == 0)
{
    recordState->currentPacket += inNumberPacketDescriptions;
}
// so you don't have to hunt them all down when you decide to switch to float:
#define AUDIO_DATA_TYPE_FORMAT SInt16
// the actual sample-grabbing code:
int sampleCount = inBuffer->mAudioDataBytesCapacity / sizeof(AUDIO_DATA_TYPE_FORMAT);
AUDIO_DATA_TYPE_FORMAT *samples = (AUDIO_DATA_TYPE_FORMAT*)inBuffer->mAudioData;
Then you have your (in this case SInt16) array, samples, which you can access from samples[0] to samples[sampleCount-1].
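Since the question asks for float data: if you do want floats for analysis, it's one normalizing division per sample. A quick sketch (assuming native-endian 16-bit LPCM, as above):

#include <vector>
// Sketch: convert SInt16 samples to floats in [-1.0, 1.0).
std::vector<float> floatSamples(sampleCount);
for (int i = 0; i < sampleCount; ++i)
    floatSamples[i] = samples[i] / 32768.0f;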
The SInt16 solution above did not work for me; I was getting the wrong sample data itself (an endianness issue). In case someone is getting wrong sample data in the future, I hope this helps you:
-(void)feedSamplesToEngine:(UInt32)audioDataBytesCapacity audioData:(void *)audioData {
    int sampleCount = audioDataBytesCapacity / sizeof(SAMPLE_TYPE);
    SAMPLE_TYPE *samples = (SAMPLE_TYPE*)audioData;
    //SAMPLE_TYPE *sample_le = (SAMPLE_TYPE *)malloc(sizeof(SAMPLE_TYPE)*sampleCount); //for swapping endians
    std::string shorts;
    double power = pow(2,10);
    for (int i = 0; i < sampleCount; i++)
    {
        SAMPLE_TYPE sample_le = (0xff00 & (samples[i] << 8)) | (0x00ff & (samples[i] >> 8)); // endianness swap
        char dataInterim[30];
        sprintf(dataInterim, "%f ", sample_le/power); // normalize it
        shorts.append(dataInterim);
    }
}