Can I use a smaller buffer for NSFileHandleReadCompletionNotification? - objective-c

I'm reading NSFileHandleReadCompletionNotification messages from the NSNotificationCenter to receive messages from an NSTask. The problem is that the command line program I'm calling is relatively slow to output lines, and it seems that the NSFileHandleReadCompletionNotification message gets posted relatively infrequently (I guess when the buffer fills up).
Is there another notification I can use that would post for every line, or is there a way to make the buffer smaller?
Edit: To be clear, I read in the NSFileHandle documentation that the buffer size is "limited to the buffer size of the underlying operating system", so I'm hoping there's some other trick.

If you read from the NSFileHandle with the
- (void)readInBackgroundAndNotify
method (right?) and parse the data in the NSFileHandleReadCompletionNotification handler, the buffer size isn't the limitation: all available data is read in the background before you receive the notification, and you then call readInBackgroundAndNotify again to fetch the next portion.
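For reference, the pattern looks roughly like this (a minimal sketch; task is assumed to be your already-configured NSTask):

NSPipe *pipe = [NSPipe pipe];
task.standardOutput = pipe;
NSFileHandle *handle = [pipe fileHandleForReading];

[[NSNotificationCenter defaultCenter]
    addObserverForName:NSFileHandleReadCompletionNotification
                object:handle
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
        NSData *data = note.userInfo[NSFileHandleNotificationDataItem];
        if (data.length > 0) {
            // parse whatever arrived; splitting it into lines is up to you
            [handle readInBackgroundAndNotify]; // re-arm for the next chunk
        }
    }];
[handle readInBackgroundAndNotify]; // start the first background read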
However, I believe your issue is caused by the well-known standard I/O output buffering.
You should turn buffering off on the command's side.
For example, if you are calling a Perl script, just put the line
$|=1;
or
use IO::Handle;
STDOUT->autoflush(1);
STDERR->autoflush(1);
near the top of the script.
For a C program, use the setvbuf function to disable buffering (the _IONBF mode).
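A minimal sketch of what that looks like in the child program:

#include <stdio.h>

int main(void) {
    /* turn off stdio buffering so each line reaches the pipe immediately */
    setvbuf(stdout, NULL, _IONBF, 0);
    setvbuf(stderr, NULL, _IONBF, 0);

    printf("this line is delivered to the reader as soon as it is printed\n");
    return 0;
}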

Sending and receiving repeated commands to a serial instrument with LabVIEW

I'm writing a program in LabVIEW 2014 in order to control a linear actuator. The program is very simple: it sets a speed and then runs the subVIs to move the actuator back and forth.
There is a case structure inside a while loop so that it stops when a desired number of iterations is reached. The problem is that the while loop iterates faster than the program inside the case structure executes, and therefore the program stops before all the cycles of movement have been completed.
send pulses subVI: (screenshot not reproduced)
activate subVI: (screenshot not reproduced)
I tried different time delays in different parts of the code, but none of that worked. I think that the issue is that the while loop iterations run faster than the code of the case structure and somehow I need to slow it down. Or maybe I'm wrong and it is a completely different thing.
Here is the link of the actuator documentation:
https://jp.optosigma.com/html/en_jp/software/motorize/manual_en/SRC-101_InstructionManual_Ver1_1_EN.pdf
Welcome to the fun and infuriating world of interfacing to serial instruments.
Each iteration of a LabVIEW loop can only complete once all the code inside the loop structure has completed, so it's not possible that 'the while loop iterations run faster than the code of the case structure'. There's nothing explicitly wrong with any of your code, but evidently it isn't doing what you expected it to. The way to approach developing an instrument driver is always to start with the simplest case (e.g. one single movement of your actuator), get that working, and build up from there.
The documentation for an instrument's serial interface is rarely perfect and yours is no exception, but it does tell us that
every command is acknowledged by a response, and
you should not send a new command until you have received the response from the previous command.
Your code to send commands and receive the response looks OK. A VISA Read operation will read bytes from the computer's serial buffer until either the number of bytes to read is reached, or a byte matching the termination char is read, or the timeout expires. The manual implies that the instrument's responses are followed by the CR and LF characters, and the default configuration of the serial port in LabVIEW is to terminate each read when an LF is received, so you shouldn't need a time delay between each write and the following read; the instrument's response will be received into the buffer by the OS, then your code will read it out and return it as soon as it hits the LF.
What isn't completely clear from the manual is how the instrument responds to the activation command, G: - does it
1. Return the acknowledgement immediately, then execute the movement (you can check whether the movement is finished using the status command !:), or
2. Execute the movement, then return the acknowledgement to show that it's finished.
I think it's 1, but that's the first thing I would check. Unless all your movements complete in less than 500 ms, I think this is what is going wrong: your program receives the acknowledgement and then moves straight on to send the next command, but the actuator is still moving and not ready. In this case you have two options:
add a time delay after the read, calculated to be long enough for the actuator move to finish - this would be easiest, but potentially unreliable
in a While loop after you have got the acknowledgement of the G: command, send the !: command and check the response until you get R for 'ready'. (Remember that the acknowledgement string you receive will also have the CRLF on the end.) Use a time delay in this loop so you don't bombard the instrument with status checks - maybe something like 200 to 1000 ms would be suitable.
If it's case 2, then you would also have two options:
configure your serial port with a read timeout long enough to cover the longest move operation, then the read operation will just block until the acknowledgement is received - again this is the quick and dirty way, or
configure a short timeout, say 1000 ms, and place the read in a While loop that repeats until the acknowledgement is received or too many timeouts have occurred. Note that a timeout is considered an error, so you will have to turn off automatic error handling for the VI and instead test the error wire out of the VISA Read, discard the timeout error and handle any other error yourself.
Just as a general tip, whenever you pass an error wire into a loop and out again, I would use a shift register. That way if one iteration generates an error, the next iteration will see that error and fail immediately, so (for example) if communication fails you don't have to wait for the read timeouts to expire multiple times before your code can exit.
You'll probably have to do some experimenting and referring to LabVIEW help to get this fully working but hopefully this is enough to get you going.

Clear WebRTC Data Channel queue

I have been trying to use a WebRTC Data Channel for a game; however, I am unable to consistently send live player data without hitting the queue size limit (8 KB) after 50-70 seconds of playing.
Since the data is required to be real-time, I have no use for data that comes out of order. I have initialized the data channel with the following attributes:
negotiated: true,
id: id,
ordered: true,
maxRetransmits: 0,
maxPacketLifetime: 66
The MDN Docs said that the buffer cannot be altered in any way.
Is there any way I can consistently send data without exceeding the buffer space? I don't mind purging the buffer, as it only contains data that has clogged up over time.
NOTE: The data transmits fine until the buffered amount exceeds the 8 KB limit.
EDIT: I forgot to add that this issue only occurs when the two sides are on different networks. When both are within the same LAN, there is no buffering (because of the higher bandwidth, I presume). I tried adding multiple data channels (8 in parallel), but this only increased the time before the failure occurred again; all 8 buffers filled up. I also tried creating a new channel each time the buffer was close to full and switching to the new channel while closing the previous, full one, but I found out the hard way (by reading the note in the MDN docs) that the buffer space is not released immediately; instead the stack keeps trying to transmit everything in the buffer, taking away precious bandwidth.
Thanks in advance.
The maxRetransmits value is ignored if the maxPacketLifetime value is set; thus, you've configured your channel to resend packets for up to 66 ms. For your application, it is probably better to use a purely unreliable channel by setting maxPacketLifetime to 0.
As Sean said, there is no way to flush the queue. What you can do is to drop packets before sending them if the channel is congested:
if (dc.bufferedAmount > 0)
    return; // the channel is congested: drop this update rather than queue it
dc.send(data);
Finally, you should realise that buffering may happen in the network as well as at the sender: any router can buffer packets when it is congested, and many routers have very large buffers (this is called BufferBloat). The WebRTC stack should prevent you from buffering too much data in the network, but if WebRTC's behaviour is not aggressive enough for your needs, you will need to add explicit feedback from the sender to the receiver in order to avoid having too many packets in flight.
I don't believe you can flush the outbound buffer, you will probably need to watch the bufferedAmount and adjust what you are sending if it grows.
Maybe handle the retransmissions yourself and discard old data if needed? WebRTC doesn't surface the SACKs from SCTP, so I think you will need to implement something yourself.
It's an interesting problem. I would love to hear the W3C WebRTC Working Group's take on it, and whether exposing more info would make things easier for you.

What does the hasSpaceAvailable property on NSOutputStream mean?

I'm trying to wrap my head around the logic behind hasSpaceAvailable on NSOutputStream.
In my app, I'm sending large amounts of data (100 MB) broken up into 4080-byte chunks (a hard limit) over a CFSocket managed by NSInputStream/NSOutputStream.
When I start writing the data, about a quarter of the way through, hasSpaceAvailable suddenly becomes NO, and so I add the data to a queue. However, if I ignore that and try to write the data anyway, the write seems to work, as the return value of write:maxLength: matches the maxLength parameter (4080).
What does the output stream have space for? As far as I can tell, when using UNIX/Berkeley sockets there is no logic available to determine if the socket can be written to; you just write and determine if all of the data was written.
The documentation for the property states:
A boolean value that indicates whether the receiver can be written to. (read-only)
YES if the receiver can be written to or if a write must be attempted in order to determine if space is available, NO otherwise.
In my example where I'm seeing NO, what is causing this result when I can still write to that socket?
I think the hasSpaceAvailable property just returns YES if the stream has sent a "space available" stream event since the last time you called the write method. You shouldn't poll that property, and it arguably shouldn't even exist. Instead, you should wait for a stream event on the output stream that says there's space available for writing.
When that stream event occurs, it means that the outgoing packet queue has at least one byte fewer than the maximum number of bytes that the socket is configured to allow you to queue up. In other words, a send() or write() system call on the socket is guaranteed to write at least one byte without blocking, and the socket is guaranteed to be in a nonblocking mode.
Note that after you write data, the stream will send another space available event immediately if the stream's buffer can take more data (or after it has sent some data if the buffer is full).
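A minimal sketch of that event-driven approach (pendingData is assumed to be an NSMutableData property holding the bytes you have queued up to send):

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)event {
    if (event == NSStreamEventHasSpaceAvailable) {
        if (self.pendingData.length == 0) {
            return; // nothing queued; the next send can write directly
        }
        NSUInteger chunk = MIN(self.pendingData.length, (NSUInteger)4080);
        NSInteger written = [(NSOutputStream *)stream
                                 write:(const uint8_t *)self.pendingData.bytes
                             maxLength:chunk];
        if (written > 0) {
            // discard the bytes the socket accepted; keep the rest queued
            [self.pendingData replaceBytesInRange:NSMakeRange(0, (NSUInteger)written)
                                        withBytes:NULL
                                           length:0];
        }
    }
}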

Does -[NSInputStream read:maxLength:] block?

I can't seem to find the answer to this anywhere, but does read:maxLength: on NSInputStream block until data is available or there is an error, or do I need to poll on hasBytesAvailable before attempting to read?
Yes, read:maxLength: blocks until at least one byte is available, an error occurs, or the stream reaches end-of-stream. It will also block until the stream has been opened.
Whether you poll, accept the blocking, or implement the stream delegate is up to you; using the stream delegate is the recommended approach.
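For example, a minimal delegate-based sketch in which read:maxLength: is only called when bytes are already available, so it never blocks:

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)event {
    switch (event) {
        case NSStreamEventHasBytesAvailable: {
            uint8_t buffer[4096];
            NSInteger n = [(NSInputStream *)stream read:buffer maxLength:sizeof(buffer)];
            if (n > 0) {
                // process the n bytes that were already available; no blocking here
            }
            break;
        }
        case NSStreamEventErrorOccurred:
        case NSStreamEventEndEncountered:
            [stream close];
            break;
        default:
            break;
    }
}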

CFStreamCreateBoundPair streams lose data with small buffer size

I am attempting to create a streaming HTML parser with libxml2 in Objective-C. I have an NSURLConnection that downloads the data, and I have created an NSInputStream and NSOutputStream with CFStreamCreateBoundPair with a small buffer size of 10 bytes. As data is received from the NSURLConnection, I write it to the output stream. It appears that when the amount of data received is larger than the buffer size, the leftover data is lost. Is this supposed to happen? From my understanding, I thought the data would be queued and written to the input stream in chunks the size of the buffer.
CFStreamCreateBoundPair Reference
You need to ensure that all data from the received chunk is eventually written into the stream.
You might do this with a simple loop in the delegate method where you continuously write a portion of the received chunk until it is completely written into the stream, as sketched below. However, this may cause the thread where the delegate is running to block for an indeterminate time: if the consumer is not ready to consume more bytes, the output stream will block when attempting to write more data.
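A minimal sketch of that loop (outputStream is assumed to be the write end of the bound pair, stored in a property):

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    const uint8_t *bytes = data.bytes;
    NSUInteger remaining = data.length;
    // keep writing until the stream has accepted the whole chunk
    while (remaining > 0) {
        NSInteger written = [self.outputStream write:bytes maxLength:remaining];
        if (written <= 0) {
            break; // stream closed or errored; check streamError and bail out
        }
        bytes += written;
        remaining -= (NSUInteger)written;
    }
}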
Alternatively, you might dispatch the NSData object asynchronously to a queue where a block does the loop and writes all the data before it completes. However, this may cause your system to run out of memory if the consumer is slow and the data is large, since all the NSData objects live on the dispatch queue until their blocks have finished.
Both approaches have pros and cons. I tend to prefer the first, since there is no memory issue and the connection will buffer the incoming bytes up to a certain upper limit anyway - before it stops acknowledging more bytes.