How to flush IN bulk endpoint buffer on device - usb

I'd like to make sure that the IN endpoint of a device connected to Chrome (via WebUSB) doesn't contain messages from a previous bulk transmission. I checked the WebUSB API:
https://wicg.github.io/webusb/
and I don't see any kind of flush function that would allow emptying the buffer. I was thinking about reading data until the device returns NAK - something like this:
/* #1 Make sure that the IN endpoint contains no more data */
while (true) {
    let result = await data.transferIn(1, 6);
    if (result.data.byteLength === 0) {
        break;
    }
}
/* #2 Send request (note: transferOut() takes the endpoint number,
   e.g. 1, not the endpoint address 0x81) */
await data.transferOut(1, message);
/* #3 Receive valid response */
let result = await data.transferIn(1, 6);
but unfortunately it looks like there is no good solution:
- when there is no more data to read, transferIn() becomes a blocking call - so we cannot rely on simply awaiting transferIn()
- when transferIn() is wrapped in a promise with a timeout, we can end up with more than one promise waiting for incoming data (which is bad, since we don't know which promise would receive the data)
What would be the best approach for making sure the device IN endpoint contains no data?

The concept of an "IN bulk endpoint buffer" doesn't exist in the USB specification. Whether a device responds to an IN PID with DATA or NAK is entirely up to the device. The answer may be generated on the fly based on the device state, or be fed from an internal buffer. There is no way for the host to know that the buffer is "empty". This is something that has to be defined at a higher protocol layer between the host and device.
For example, the host or device may indicate the amount of data that is expected to be transferred. The host then knows how much data it can expect to read from the IN endpoint before the current operation is complete. This is how the USB Mass Storage protocol works.
If the protocol between the host and device doesn't define these kinds of message boundaries, the best way to flush the buffer is not to try. Instead, always be reading from the IN endpoint and interpret data as it is received. This may include using setTimeout() to check whether a response has been received for a particular request within a given deadline. Data received that is not in response to a request can be discarded if it is uninteresting.
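As a rough sketch of that pattern (the endpoint number 1, the 64-byte read size, and the readLoop/sendRequest helpers are illustrative assumptions, not part of the WebUSB API), a single dedicated read loop avoids ever having two transferIn() promises competing for data:

let pendingResolve = null;  // resolver for the request currently awaiting a reply

async function readLoop(device) {
    // Keep exactly one transferIn() outstanding at all times, so there is
    // never more than one promise waiting for incoming data.
    while (device.opened) {
        const result = await device.transferIn(1, 64);
        if (result.status === 'ok' && result.data.byteLength > 0) {
            if (pendingResolve) {
                pendingResolve(result.data);  // this answers the outstanding request
                pendingResolve = null;
            }
            // else: data left over from a previous exchange - discard it
        }
    }
}

function sendRequest(device, message, timeoutMs = 1000) {
    const reply = new Promise((resolve, reject) => {
        pendingResolve = resolve;
        setTimeout(() => {
            if (pendingResolve === resolve) {  // still unanswered after the deadline
                pendingResolve = null;
                reject(new Error('no response within deadline'));
            }
        }, timeoutMs);
    });
    return device.transferOut(1, message).then(() => reply);
}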

Related

WebUSB `USBTransferInResult`s seem to contain partial interrupt transfers

I'm using the WebUSB API in Chrome. I can pair with the device, claim an interface, and begin listening to an inbound interrupt endpoint that transfers three bytes every time I press a button, and three more when the button is released (it's a vendor-specific MIDI implementation).
The USBTransferInResult.data.buffer contains all of the bytes it should, except they are not provided transfer-wise. The bytes are being transferred one byte at a time, unless I do something to generate a bunch of data at the same time, in which case, there may be as many as three or four bytes in the same USBTransferInResult.
Note: The maximum packet size for this endpoint is 8. I've tried setting it to stuff like 1 and 256 with no effect.
If I concatenated all of the result buffers, I'd have the exact data I'm expecting, but surely the API should make each transfer (seemingly) atomic.
This could be the result of something funky that the vendor (Focusrite - it's a Novation product) does with their non-compliant MIDI implementation. I just assumed that the vendor would prefer to transfer each MIDI message as an atomic interrupt transfer (not three one-byte transfers in rapid succession), as it would simplify the driver and make it more robust. I cannot see the advantage of breaking these messages up.
Note: If I enable the experimental-usb-backend, my USB device stops appearing in the dialog (when requestDevice is invoked).
This is the code I'm testing it with:
let DEVICE = undefined;

const connect = async function() {
    /* Initialize the device, assign it to the global variable,
       claim Interface 1, then invoke `listen`. */
    const filters = [{vendorId: 0x1235, productId: 0x0018}];
    DEVICE = await navigator.usb.requestDevice({filters});
    await DEVICE.open();
    await DEVICE.selectConfiguration(1);
    await DEVICE.claimInterface(1);
    listen();
};

const listen = async function() {
    /* Listen (recursively) for each interrupt transfer from
       Endpoint 4, asking for up to 8 bytes each time, and then
       log each transfer (as a regular array of numbers). */
    const result = await DEVICE.transferIn(4, 8);
    const data = new Uint8Array(result.data.buffer);
    console.log(Array.from(data));
    listen();
};

// Note: There are a few lines of UI code here that provide a
// button for invoking the `connect` function above, and
// another button that invokes the `close` method of
// the USB device.
Given this issue is not reproducible without the USB device, I don't want to report it as a bug, unless I'm sure that it is one. I was hoping somebody here could help me.
Have I misunderstood the way the WebUSB API works?
Is it reasonable to assume that the vendor may have intended to break MIDI messages into individual bytes?
On reflection, the way this works may be intentional.
The USB MIDI spec is very complicated, as it seeks to accommodate complex MIDI setups, which can constitute entire networks in their own right. The device I'm hacking (the Novation Twitch DJ controller) has no MIDI connectivity, so it would have been much easier for the Novation engineers to just pass each MIDI message as USB interrupt transfers.
As for why it streams the MIDI bytes as soon as they're ready, I'm assuming this simplified the hardware, and that the stream is intended to be interpreted like bytecode. Each MIDI message begins with a status byte that indicates the number of data bytes that will follow it (analogous to an opcode, followed by some immediates).
Note: Status bytes also have a leading 1, while data bytes have a leading 0, so they are easy to tell apart (and SysEx messages use specific start and end bytes).
In the end, it was simple enough to use the status bytes to indicate when to instantiate a new message, and what type it should be. I then implemented a set of MIDI message classes (NoteOn, Control, SysEx etc) that each know when they have the right number of bytes (to simplify the logic for each individual message).
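For illustration, a minimal sketch of that reassembly logic (the length table is abbreviated, SysEx and system messages are left out, and none of this is the original class-based code):

// Channel-message lengths, keyed by the upper nibble of the status byte.
const LENGTHS = {0x80: 3, 0x90: 3, 0xA0: 3, 0xB0: 3, 0xC0: 2, 0xD0: 2, 0xE0: 3};
let message = [];

const onByte = function(byte) {
    if (byte & 0x80) {
        message = [byte];       // status byte: start a new message
    } else if (message.length) {
        message.push(byte);     // data byte: extend the current message
    }                           // (data bytes with no open message are dropped)
    const expected = LENGTHS[message[0] & 0xF0];
    if (expected && message.length === expected) {
        console.log('MIDI message:', message);
        message = [];
    }
};

Feeding every byte of every USBTransferInResult to onByte() makes it irrelevant how the transfers were split.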

What happens to client message if Server does not exist in UDP Socket programming?

I ran only client.java (without the server). When I filled in the form and pressed the send button, the program jammed and I could not do anything.
Is there any explanation for this?
TL;DR: the User Datagram Protocol (UDP) is "fire-and-forget".
Unreliable – When a UDP message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission, or timeout.
So if a UDP message is sent and nobody listens then the packet is just dropped. (UDP packets can also be silently dropped due to other network issues/congestion.)
While there could be a prior error, such as resolving the IP for the server (e.g. an invalid hostname) or attempting to use an invalid IP, once the UDP packet is out the door, it's out the door and is considered "successfully sent".
Now, if a program is waiting on a response that never comes (i.e. the server is down or the packet was "otherwise lost"), then that could be... problematic.
That is, this code which requires a UDP response message to continue would "hang":
sendUDPToServerThatNeverResponds();
// There is no guarantee the server will get the UDP message,
// much less that it will send a reply or the reply will get back
// to the client..
waitForUDPReplyFromServerThatWillNeverCome();
Since UDP has no reliability guarantee or retry mechanism, this must be handled in code. For example, in the code below, the client waits one second at a time for a reply and, after 5 attempts with no response, reports an error to the user.
sendUDPToServerThatMayOrMayNotRespond();
while (i++ < 5) {
    reply = waitForUDPReplyForOneSecond();
    if (reply)
        break;
}
if (reply)
    doSomethingAwesome();
else
    showErrorToUser();
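In Java (the asker's language), a concrete version of this pseudo-code might look like the following; the host, port and payload are placeholders, and setSoTimeout() is what keeps receive() from blocking forever:

import java.net.*;

public class UdpClientWithRetry {
    public static void main(String[] args) throws Exception {
        byte[] request = "hello".getBytes();
        InetAddress server = InetAddress.getByName("localhost");
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(1000);  // wait at most 1 second per attempt
            for (int attempt = 0; attempt < 5; attempt++) {
                socket.send(new DatagramPacket(request, request.length, server, 9876));
                try {
                    byte[] buf = new byte[512];
                    DatagramPacket reply = new DatagramPacket(buf, buf.length);
                    socket.receive(reply);  // throws SocketTimeoutException on timeout
                    System.out.println("Got reply: "
                            + new String(reply.getData(), 0, reply.getLength()));
                    return;
                } catch (SocketTimeoutException e) {
                    // no reply within 1 second; loop around and resend
                }
            }
            System.err.println("No response after 5 attempts.");
        }
    }
}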
Of course, "just using TCP" can often make these sorts of tasks simpler, due to the stream and reliability characteristics that the Transmission Control Protocol (TCP) provides. For example, the retry code above is not very robust, as the client must also be prepared to handle latent/slow UDP packet arrivals from previous requests.
(Also, given the current "screenshot", the code might be as flawed as while(true) {} - make sure to provide an SSCCE and relevant code with questions.)

How to handle EAGAIN case for TLS over SCTP streams using memory BIO interface

I'm using the BIO memory interface to have TLS implemented over SCTP.
So at the client side, while sending out application data:
1. SSL_write() encrypts the data and writes it to the associated write BIO.
2. The data from the BIO is read into an output buffer using BIO_read(), and then
3. sent out on the socket using sctp_sendmsg().
Similarly at the server side, while reading data from the socket:
1. sctp_recvmsg() reads encrypted message chunks from the socket,
2. BIO_write() writes them to the read BIO buffer, and
3. SSL_read() decrypts the data read from the BIO.
The case I'm interested in is where, at the client side, steps 1 and 2 are done, and while doing step 3 I get an EAGAIN from the socket. So whatever data I've read from the BIO buffer, I clean it up, and ask the application to resend the data again after some time.
Now when I do this, and later when steps 1, 2 and 3 at the client side go through fine, at the server side OpenSSL finds that the record it received has a bad_record_mac and closes the connection.
From googling I learned that one way this can happen is if TLS records arrive out of sequence, as the MAC computation depends on the previous record and TLS needs records delivered in the same order. So when I was cleaning up the data on EAGAIN, was I dropping an SSL record and then sending the next record out of order (missing clarity here)?
Just to make sure of my hypothesis, whenever the socket returned EAGAIN I changed the code to wait indefinitely until the socket was writable, and then everything goes fine and I don't see any bad_record_mac at the server side.
Can someone help me with this EAGAIN handling? I can't do an infinite wait to get around the issue; is there any other way out?
... I get an EAGAIN from the socket. So whatever data I've read from the BIO buffer, I clean it up, and ask the application to resend the data again after some time.
If you get an EAGAIN on the socket you should try to send the same encrypted data later.
What you do instead is throw the encrypted data away and ask the application to send the same plain data again. This means that these data get encrypted again. But encrypting plain data in SSL also includes a sequence number for the SSL frame, and this sequence number is not the same as the one of the SSL frame you threw away.
Thus, if you have thrown away the full SSL frame, you are trying to send a new SSL frame with the next sequence number, which does not match the expected sequence number. If you succeeded in sending part of the previous SSL frame and threw away the rest, then the new data you send will be considered part of the previous frame, which means that the HMAC of the frame will not match.
Thus, don't throw away the encrypted data; try to resend it instead of letting the upper layer resend the plain data.
1. Select for writability.
2. Repeat the send.
3. If the send was incomplete, remove the part of the buffer that got sent and go to (1).
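As a sketch of that loop in C (assuming a non-blocking one-to-one SCTP socket; the sctp_sendmsg() arguments beyond the buffer are zeroed for brevity, and error handling is abbreviated):

#include <errno.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/sctp.h>

/* Send the already-encrypted bytes drained from the write BIO, keeping
 * and resending the same bytes on EAGAIN, so that no SSL record is ever
 * dropped or re-encrypted. */
static int send_all(int fd, const unsigned char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = sctp_sendmsg(fd, buf + off, len - off,
                                 NULL, 0, 0, 0, 0, 0, 0);
        if (n >= 0) {
            off += (size_t)n;              /* some (or all) bytes were sent */
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            fd_set wfds;                   /* (1) select for writability */
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0)
                return -1;
            /* (2)/(3) the loop then repeats the send with the unsent rest */
        } else {
            return -1;
        }
    }
    return 0;
}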
So whatever data I've read from the BIO buffer, I clean it up
I don't know what this means. You're sending, not receiving.
Just to make sure of my hypothesis, whenever the socket returned EAGAIN I changed the code to wait indefinitely until the socket was writable, and then everything goes fine and I don't see any bad_record_mac at the server side.
That's exactly what you should do. I can't imagine what else you could possibly have been doing instead, and your description of it doesn't make any sense.

WCF service clear buffer

I am currently working on a WCF service and have a small issue. The service is a Polling Duplex service. I initiate data transfer through a message sent to the server. Then the server sends large packets of data back through the callback channel to the client fairly quickly.
To stop the transfer I send a message to the server telling it to stop. Then the server sends a message over the callback channel acknowledging this, to let the client know.
The problem is that a bunch of packets of data get buffered up to be sent through the callback channel to the client. This causes a long wait for the acknowledgement to make it back, because it has to wait for all of the buffered data to go through first.
Is there any way that I can clear the buffer for the callback channel on the server side? I don't need to worry about losing the data; I just want to throw it away and immediately send the acknowledgement message.
I'm not sure if this can lead you in the right direction or not... I have a similar service where, when I look in my Subscribe() method, I can access this:
var context = OperationContext.Current;
var sessionId = context.SessionId;
var currentClient = context.GetCallbackChannel<IClient>();
context.OutgoingMessageHeaders.Clear();
context.OutgoingMessageProperties.Clear();
Now, if you had a way of using your IClient object and of accessing the context you got the instance of IClient from (resolving its context), would running the following two statements do what you want?
context.OutgoingMessageHeaders.Clear();
context.OutgoingMessageProperties.Clear();
Just a quick ramble from my thoughts. I would love to know whether this fixes it, for personal information if nothing else. Could you cache the OperationContext as part of a SubscriptionObject containing two properties: the first for the OperationContext, and the second for your IClient object?

how to timeout periodically in libpcap packet receiving functions

I found this post in stackoverflow.com
listening using Pcap with timeout
I am facing a similar (but different) problem: what is the GENERIC (platform-independent) method to time out periodically when receiving captured packets with the libpcap packet-receiving functions?
Actually, I am wondering whether it is possible to periodically time out from pcap_dispatch(pcap_t...) / pcap_next_ex(pcap_t...). If that is possible, I can use them just like the classic select(...timeout) function (http://linux.die.net/man/2/select).
In addition, from the official webpage (http://www.tcpdump.org/pcap3_man.html), I found that the original timeout mechanism is considered buggy and platform-specific (this is bad, since my program may run on different Linux and Unix boxes):
"... ... to_ms specifies the read timeout in milliseconds. The read timeout is used to arrange that the read not necessarily return immediately when a packet is seen, but that it wait for some amount of time to allow more packets to arrive and to read multiple packets from the OS kernel in one operation. Not all platforms support a read timeout; on platforms that don't, the read timeout is ignored ... ...
NOTE: when reading a live capture, pcap_dispatch() will not necessarily return when the read times out; on some platforms, the read timeout isn't supported, and, on other platforms, the timer doesn't start until at least one packet arrives. This means that the read timeout should NOT be used in, for example, an interactive application, to allow the packet capture loop to "poll" for user input periodically, as there's no guarantee that pcap_dispatch() will return after the timeout expires... ..."
Therefore, I guess I need to implement the GENERIC (platform-independent) timeout mechanism myself, like below?
1. Create a pcap_t structure with pcap_open_live().
2. Set it to non-blocking mode with pcap_setnonblock(pcap_t...).
3. Poll this non-blocking pcap_t with a registered OS timer, like:
register OS timer_x, and reset timer_x;
while (1) {
    if (timer_x times out)
        { do something that needs to be done periodically; reset timer_x; }
    poll pcap_t by calling pcap_dispatch(pcap_t...)/pcap_next_ex(pcap_t...) to receive some packets;
    do something with these packets;
} // end of while(1)
You can get the handle with pcap_fileno() and select() it.
There's a sample here in OfferReceiver::Listen().
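For illustration, a self-contained sketch of that approach (the device name "eth0" and the 1-second period are placeholders; where available, pcap_get_selectable_fd() is the more portable way to obtain the descriptor):

#include <pcap/pcap.h>
#include <stdio.h>
#include <sys/select.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes)
{
    printf("packet: %u bytes\n", h->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth0", 65535, 1, 500, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_setnonblock(p, 1, errbuf);
    int fd = pcap_fileno(p);

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        struct timeval tv = {1, 0};             /* 1-second periodic timeout */
        int ready = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (ready < 0)
            break;                              /* select() error */
        if (ready == 0)
            continue;                           /* timeout: do periodic work here */
        pcap_dispatch(p, -1, on_packet, NULL);  /* drain the packets that arrived */
    }
    pcap_close(p);
    return 0;
}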