AudioQueueOutputCallback buffer data (during playback and recording) - objective-c

I'm having trouble understanding what exactly is in the buffer containing the incoming data.
Does the AudioQueueBufferRef refer to one piece of data (from the last callback call), or does it hold all of the past data (since the AudioQueue started)?

Have you taken a look at the Apple developer documentation for this? It describes the buffer passed to the callback as:
An audio queue buffer, newly available to fill because the playback audio queue has acquired its contents.
In other words, each invocation hands you a single buffer that the queue has just finished with, not the accumulated data since the queue started. The callback also receives a reference to the queue that triggered it.
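To make that concrete, here is a minimal sketch of an output callback; MyFillBuffer is a hypothetical function standing in for whatever produces your audio data:

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical producer: copies up to `capacity` bytes of audio into
// `dest` and returns how many bytes it wrote.
static UInt32 MyFillBuffer(void *userData, void *dest, UInt32 capacity);

// The queue hands the callback ONE buffer it has just finished
// playing - only that buffer's worth of data, not everything since
// the queue started. Refill it and hand it back to the queue.
static void MyOutputCallback(void *inUserData,
                             AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    UInt32 filled = MyFillBuffer(inUserData,
                                 inBuffer->mAudioData,
                                 inBuffer->mAudioDataBytesCapacity);
    inBuffer->mAudioDataByteSize = filled;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}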

Related

Clear WebRTC Data Channel queue

I have been trying to use a WebRTC Data Channel for a game; however, I am unable to consistently send live player data without hitting the queue size limit (8KB) after 50-70 seconds of playing.
Since the data needs to be real-time, I have no use for data that arrives out of order. I initialized the data channel with the following attributes:
negotiated: true,
id: id,
ordered: true,
maxRetransmits: 0,
maxPacketLifetime: 66
The MDN Docs said that the buffer cannot be altered in any way.
Is there any way I can consistently send data without exceeding the buffer space? I don't mind purging the buffer, as it only contains data that has clogged up over time.
NOTE: The data transmits fine until the buffered amount exceeds the 8KB limit.
EDIT: I forgot to add that this issue only occurs when the two sides are on different networks. When both are within the same LAN, there is no buffering (because of the higher bandwidth, I presume). I tried adding multiple data channels (8 in parallel), but this only increased the time before the failure occurred again; all 8 buffers filled up. I also tried creating a new channel each time the buffer was close to full and switching to the new one while closing the previous, full one, but I found out the hard way (by reading the note in the MDN docs) that the buffer space is not released immediately; instead the channel keeps trying to transmit everything in its buffer, taking away precious bandwidth.
Thanks in advance.
The maxRetransmits value is ignored if the maxPacketLifetime value is set; thus, you've configured your channel to resend packets for up to 66ms. For your application, it is probably better to use a pure unreliable channel by setting maxPacketLifetime to 0.
As Sean said, there is no way to flush the queue. What you can do is drop packets before sending them if the channel is congested:
if (dc.bufferedAmount > 0) {
    // Data is still queued: the channel is congested, so drop this
    // packet rather than letting the buffer grow.
    return;
}
dc.send(data);
Finally, you should realise that buffering may happen in the network as well as at the sender: any router can buffer packets when it is congested, and many routers have very large buffers (this is called BufferBloat). The WebRTC stack should prevent you from buffering too much data in the network, but if WebRTC's behaviour is not aggressive enough for your needs, you will need to add explicit feedback from the sender to the receiver in order to avoid having too many packets in flight.
I don't believe you can flush the outbound buffer, you will probably need to watch the bufferedAmount and adjust what you are sending if it grows.
Maybe handle the retransmissions yourself and discard old data if needed? WebRTC doesn't surface the SACKs from SCTP, so I think you will need to implement something yourself.
It's an interesting problem. I would love to hear the WebRTC W3C Working Group's take on whether exposing more info would make things easier for you.

Should epoll EDGE triggered work when you read partial data?

I want to be notified when a USB mouse is disconnected (not just have the read fail). I use epoll with the flags
EPOLLIN | EPOLLERR | EPOLLRDHUP | EPOLLET
and I read events with
read(fd, &event, sizeof event)
where event is a struct input_event.
I wait for events from the mouse. All is good and working fine until I click a mouse button. That generates two events at the same time: one is the EV_MSC/MSC_SCAN event and the other is EV_KEY/BTN_LEFT. If I read only one event (i.e. read with a buffer of length 24), I get another epoll notification, and the read gets the EV_MSC event again. If I read with a buffer of size 48, I get both events.
What is the correct way to handle this case? Shouldn't I keep reading until I get EAGAIN in the read event handler?
Oops, my bad. It turned out I was reading from a descriptor that had no data (a uinput device).
The sole difference between level-triggered and edge-triggered is that edge-triggered will only notify you when new data is queued, while level-triggered will keep notifying you until you read all the data.
If you're going to use edge triggering, you should make sure to read all the data after you get a notification because you are not guaranteed to get a new notification unless new data arrives. (There are some circumstances where you will get a notification, but it is not guaranteed and so it is an error to rely on it.)
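With edge triggering, the usual pattern is exactly that: drain the descriptor in a loop until read returns EAGAIN. A minimal sketch, assuming a non-blocking evdev fd already registered with EPOLLET:

#include <errno.h>
#include <unistd.h>
#include <linux/input.h>

// Drain every queued event. With EPOLLET we must read until EAGAIN,
// or events left in the kernel queue may never trigger another
// notification.
static void drain_events(int fd)
{
    struct input_event ev;
    for (;;) {
        ssize_t n = read(fd, &ev, sizeof ev);
        if (n == (ssize_t)sizeof ev) {
            // handle ev.type / ev.code / ev.value here
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            break; // queue fully drained; wait for the next epoll event
        break;     // 0 or other error: the device likely went away
    }
}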

What does the hasSpaceAvailable property on NSOutputStream mean?

I'm trying to wrap my head around the logic behind hasSpaceAvailable on NSOutputStream.
In my app, I'm sending large amounts of data (100MB) broken up into 4080-byte chunks (a hard limit) over a CFSocket managed by NSInputStream/NSOutputStream.
When I start writing the data, about a quarter of the way through, hasSpaceAvailable suddenly becomes NO, so I add the data to a queue. However, if I ignore that and try to write the data anyway, the write seems to work, as the return value of write:maxLength: matches the maxLength parameter (4080).
What does the output stream have space for? As far as I can tell, with UNIX/Berkeley sockets there is no way to determine up front whether the socket can be written to; you just write and check how much of the data was actually written.
The documentation for the property states:
A boolean value that indicates whether the receiver can be written to. (read-only)
YES if the receiver can be written to or if a write must be attempted in order to determine if space is available, NO otherwise.
In my example, where I'm seeing NO, what is causing this result when I can still write to the socket?
I think the hasSpaceAvailable property just returns YES if the stream has sent a "space available" stream event since the last time you called the write method. You shouldn't poll that property (arguably, it shouldn't even exist). Instead, you should wait for the stream event on the output stream that says there's space available for writing.
When that stream event occurs, it means that the outgoing packet queue has at least one byte fewer than the maximum number of bytes that the socket is configured to allow you to queue up. In other words, a send() or write() system call on the socket is guaranteed to write at least one byte without blocking, and the socket is guaranteed to be in a nonblocking mode.
Note that after you write data, the stream will send another space available event immediately if the stream's buffer can take more data (or after it has sent some data if the buffer is full).
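To illustrate, here is a rough sketch of driving writes from the event instead of polling the property; pendingData, offset, and canWrite are hypothetical bookkeeping on your own class, not part of the NSStream API:

- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    if (eventCode != NSStreamEventHasSpaceAvailable)
        return;
    NSOutputStream *output = (NSOutputStream *)aStream;
    NSUInteger remaining = self.pendingData.length - self.offset;
    if (remaining == 0) {
        // Nothing queued right now. Remember that space is available so
        // the next enqueue can write immediately; the event won't repeat.
        self.canWrite = YES;
        return;
    }
    const uint8_t *bytes = (const uint8_t *)self.pendingData.bytes + self.offset;
    // The stream is guaranteed to accept at least one byte without blocking.
    NSInteger written = [output write:bytes maxLength:remaining];
    if (written > 0)
        self.offset += written;
}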

CFStreamCreateBoundPair streams lose data with small buffer size

I am attempting to create a streaming HTML parser with libxml2 in Objective-C. I have an NSURLConnection that downloads the data, and I have created an NSInputStream and an NSOutputStream with CFStreamCreateBoundPair, using a small buffer size of 10 bytes. As data is received by the NSURLConnection, I write it to the output stream. It appears that when the amount of data received is larger than the buffer size, the leftover data is lost. Is this supposed to happen? From my understanding, I thought the data would be queued and written to the input stream in chunks of the buffer size.
CFStreamCreateBoundPair Reference
You need to ensure that all data from the received chunk is eventually written into the stream.
You might do this with a simple loop in the delegate method where you repeatedly write a portion of the received chunk until it has been completely written into the stream. However, this may block the thread the delegate is running on for an indeterminate time: if the consumer is not ready to consume more bytes, the output stream will block when you attempt to write more data.
Alternatively, you might dispatch the NSData object asynchronously to a queue where a block performs the loop and writes all the data before it completes. However, this may cause your system to run out of memory if the consumer is slow and the data is large, since all of the NSData objects stay alive on the dispatch queue until their blocks finish.
Both approaches have pros and cons. I tend to prefer the first, since there is no memory issue and the connection will buffer the incoming bytes up to a certain upper limit anyway before it stops acknowledging more bytes.
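A sketch of the first approach, with a hypothetical helper you would call from connection:didReceiveData:; it loops because a single write may consume only part of the chunk, and it blocks until the bound pair has room:

// Write every byte of `data` into the output stream of the bound pair.
static void WriteAllData(NSOutputStream *stream, NSData *data)
{
    const uint8_t *bytes = (const uint8_t *)data.bytes;
    NSUInteger remaining = data.length;
    while (remaining > 0) {
        NSInteger written = [stream write:bytes maxLength:remaining];
        if (written <= 0) {
            // Error or stream closed: stop instead of spinning.
            break;
        }
        bytes += written;
        remaining -= (NSUInteger)written;
    }
}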

Recording Audio on iPhone and Sending Over Network with NSOutputStream

I am writing an iPhone application that needs to record audio from the built-in microphone and then send that audio data to a server for processing.
The application uses a socket connection to connect to the server and Audio Queue Services to do the recording. What I am unsure of is when to actually send the data. Audio Queue Services fires a callback each time it has filled a buffer with some audio data. NSOutputStream fires an event each time it has space available.
My first thought was to send the data to the server on the Audio Queue callback. But it seems like this would run into a problem if the NSOutputStream does not have space available at that time.
Then I thought about buffering the data as it comes back from the Audio Queue and sending some of it each time the NSOutputStream fires a space-available event. But this seems to have a problem too: if sending to the server gets ahead of the recording, there will be a moment when there is nothing to write on the space-available event, so the event will not fire again and the data transfer will effectively stall.
So what is the best way to handle this? Should I have a timer that fires repeatedly and checks whether there is space available and data that needs to be sent? Also, I think I will need some thread synchronization so that I can take chunks of data out of my buffer to send across the network while also adding chunks to the buffer as the recording proceeds, without risking mangling the buffer.
You could use a ring buffer to hold a certain number of audio frames and drop frames if the buffer exceeds a certain size. When your stream-has-space-available callback gets called, pull a frame off the ring buffer and send it.
CHDataStructures provides a few ring-buffer (which it calls “circular buffer”) classes.
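If you'd rather avoid the dependency, a bounded FIFO like the hypothetical FrameBuffer below gives the same behavior: the Audio Queue callback pushes chunks, the space-available handler pops them, and the oldest frames are dropped once the capacity is exceeded:

#import <Foundation/Foundation.h>

// A minimal thread-safe frame queue. Not a true ring buffer, but the
// same idea: bounded storage that sheds the oldest data under pressure.
@interface FrameBuffer : NSObject
- (instancetype)initWithCapacity:(NSUInteger)capacity;
- (void)push:(NSData *)frame;   // called from the Audio Queue callback
- (NSData *)pop;                // called on space-available; nil if empty
@end

@implementation FrameBuffer {
    NSMutableArray<NSData *> *_frames;
    NSUInteger _capacity;
}

- (instancetype)initWithCapacity:(NSUInteger)capacity
{
    if ((self = [super init])) {
        _frames = [NSMutableArray array];
        _capacity = capacity;
    }
    return self;
}

- (void)push:(NSData *)frame
{
    @synchronized (self) {
        [_frames addObject:frame];
        if (_frames.count > _capacity)
            [_frames removeObjectAtIndex:0]; // drop the oldest frame
    }
}

- (NSData *)pop
{
    @synchronized (self) {
        if (_frames.count == 0)
            return nil;
        NSData *frame = _frames.firstObject;
        [_frames removeObjectAtIndex:0];
        return frame;
    }
}

@end

When pop returns nil on a space-available event, set a flag so the next push can kick off a write immediately; otherwise the transfer stalls exactly as you described.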