If WebRTC disables audio, will it still send empty packets?

mediaStream.getAudioTracks()[0].enabled = false;
but I found that WebRTC still sends audio data. How do I stop it from sending audio data when the track is disabled? I want to save battery.

Yes: if you set track.enabled = false, empty frames are still sent. The bitrate is very low, though; in my case it's ~1 kbit/s for an audio track. The browser also stops capturing audio frames from the microphone, so the only overhead you have is ~50 empty packets being sent each second.
If you still want to get rid of this small overhead, the only way to do it is to call track.stop() and remove it from RTCPeerConnection.
The downside is that when you need to unmute, you will need to call getUserMedia, addTrack, and then do another offer/answer exchange, so it's going to be slower than just calling track.enabled = true.
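The stop-and-renegotiate path might be sketched like this (a sketch only: `pc` is your RTCPeerConnection, and `renegotiate` stands in for whatever offer/answer exchange your signalling code performs — both names are assumptions, not part of any API):

```javascript
// Sketch: fully releasing the microphone instead of track.enabled = false.
// Assumes an existing RTCPeerConnection `pc` and a hypothetical helper
// `renegotiate(pc)` that performs the offer/answer exchange.

let audioSender = null;

async function hardMute(pc) {
  if (!audioSender) return;
  audioSender.track.stop();       // stop capturing from the microphone
  pc.removeTrack(audioSender);    // stop sending (even empty) packets
  audioSender = null;
  await renegotiate(pc);          // a new offer/answer exchange is required
}

async function hardUnmute(pc) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  audioSender = pc.addTrack(stream.getAudioTracks()[0], stream);
  await renegotiate(pc);          // slower than just track.enabled = true
}
```

This trades the small idle overhead for a slower unmute, which is usually the right trade only if battery matters more than mute latency.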

Related

Clear WebRTC Data Channel queue

I have been trying to use WebRTC Data Channel for a game, however, I am unable to consistently send live player data without hitting the queue size limit (8KB) after 50-70 secs of playing.
Since the data is required to be real-time, I have no use for data that arrives out of order. I have initialized the data channel with the following attributes:
negotiated: true,
id: id,
ordered: true,
maxRetransmits: 0,
maxPacketLifetime: 66
The MDN Docs said that the buffer cannot be altered in any way.
Is there any way I can consistently send data without exceeding the buffer space? I don't mind purging the buffer, as it only contains data that has clogged up over time.
NOTE: The data is transmitting until the buffer size exceeds the 8KB space.
EDIT: I forgot to add that this issue only occurs when the two sides are on different networks. When both are within the same LAN, there is no buffering (higher bandwidth, I presume). I tried adding multiple Data Channels (8 in parallel), but this only increased the time before the failure occurred again; all 8 buffers filled up. I also tried creating a new channel whenever the buffer was close to full, switching to the new DC and closing the previous full one, but I found out the hard way (reading the Note in the MDN Docs) that the buffer space is not released immediately; rather, the channel keeps trying to transmit all the data in the buffer, taking away precious bandwidth.
Thanks in advance.
The maxRetransmits value is ignored if the maxPacketLifetime value is set; thus, you've configured your channel to resend packets for up to 66ms. For your application, it is probably better to use a pure unreliable channel by setting maxPacketLifetime to 0.
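Following that advice, a minimal channel configuration might look like this (a sketch; `pc` is an existing RTCPeerConnection, and only one of the two reliability options is passed):

```javascript
// Sketch: a purely unreliable channel for real-time game state.
// Only maxPacketLifetime is set, so lost packets are dropped, never queued
// for retransmission.
function createGameChannel(pc, id) {
  return pc.createDataChannel("game", {
    negotiated: true,       // both sides create the channel with the same id
    id: id,
    ordered: true,          // as in the question
    maxPacketLifetime: 0    // pure unreliable: never retransmit
  });
}
```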
As Sean said, there is no way to flush the queue. What you can do is to drop packets before sending them if the channel is congested:
if (dc.bufferedAmount > 0)
    return;
dc.send(data);
Finally, you should realise that buffering may happen in the network as well as at the sender: any router can buffer packets when it is congested, and many routers have very large buffers (this is called BufferBloat). The WebRTC stack should prevent you from buffering too much data in the network, but if WebRTC's behaviour is not aggressive enough for your needs, you will need to add explicit feedback from the sender to the receiver in order to avoid having too many packets in flight.
I don't believe you can flush the outbound buffer, you will probably need to watch the bufferedAmount and adjust what you are sending if it grows.
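The watch-and-adjust approach from both answers can be sketched like this (the threshold values are assumptions to tune for your game, not anything prescribed by the API):

```javascript
// Sketch: gate sends on bufferedAmount and resume via "bufferedamountlow".
// HIGH_WATER is an arbitrary threshold, chosen to stay well under the ~8 KB
// limit described in the question.
const HIGH_WATER = 4096;

function sendRealtime(dc, data) {
  if (dc.bufferedAmount > HIGH_WATER) return false;  // drop stale state
  dc.send(data);
  return true;
}

function watchBackpressure(dc, onReady) {
  dc.bufferedAmountLowThreshold = HIGH_WATER / 2;
  dc.addEventListener("bufferedamountlow", onReady);  // safe to send again
}
```

Dropping at the sender like this keeps the queue from ever reaching the hard limit, at the cost of skipping some state updates under congestion.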
Maybe handle the retransmissions yourself and discard old data if needed? WebRTC doesn't surface the SACKs from SCTP, so I think you will need to implement something yourself.
It's an interesting problem. I'd love to hear the WebRTC W3C Working Group's take on whether exposing more info would make things easier for you.

WebUSB `USBTransferInResult`s seem to contain partial interrupt transfers

I'm using the WebUSB API in Chrome. I can pair with the device, claim an interface, and begin listening to an inbound interrupt endpoint that transfers three bytes every time I press a button, and three more when the button is released (it's a vendor-specific MIDI implementation).
The USBTransferInResult.data.buffer contains all of the bytes it should, except they are not provided transfer-wise. The bytes are being transferred one byte at a time, unless I do something to generate a bunch of data at the same time, in which case, there may be as many as three or four bytes in the same USBTransferInResult.
Note: The maximum packet size for this endpoint is 8. I've tried setting it to stuff like 1 and 256 with no effect.
If I concatenated all of the result buffers, I'd have the exact data I'm expecting, but surely the API should make each transfer (seemingly) atomic.
This could be the result of something funky that the vendor (Focusrite - it's a Novation product) does with their non-compliant MIDI implementation. I just assumed that the vendor would prefer to transfer each MIDI message as an atomic interrupt transfer (not three one-byte transfers in rapid succession), as it would simplify the driver and make it more robust. I cannot see the advantage of breaking these messages up.
Note: If I enable the experimental-usb-backend, my USB device stops appearing in the dialog (when requestDevice is invoked).
This is the code I'm testing it with:
let DEVICE = undefined;

const connect = async function() {
    /* Initialize the device, assign it to the global variable,
    claim Interface 1, then invoke `listen`. */
    const filters = [{vendorId: 0x1235, productId: 0x0018}];
    DEVICE = await navigator.usb.requestDevice({filters});
    await DEVICE.open();
    await DEVICE.selectConfiguration(1);
    await DEVICE.claimInterface(1);
    listen();
};

const listen = async function() {
    /* Recursively listen for each interrupt transfer from
    Endpoint 4, asking for up to 8 bytes each time, and then
    log each transfer (as a regular array of numbers). */
    const result = await DEVICE.transferIn(4, 8);
    const data = new Uint8Array(result.data.buffer);
    console.log(Array.from(data));
    listen();
};

// Note: There are a few lines of UI code here that provide a
// button for invoking the `connect` function above, and
// another button that invokes the `close` method of
// the USB device.
Given this issue is not reproducible without the USB device, I don't want to report it as a bug, unless I'm sure that it is one. I was hoping somebody here could help me.
Have I misunderstood the way the WebUSB API works?
Is it reasonable to assume that the vendor may have intended to break MIDI messages into individual bytes?
On reflection, the way this works may be intentional.
The USB MIDI spec is very complicated, as it seeks to accommodate complex MIDI setups, which can constitute entire networks in their own right. The device I'm hacking (the Novation Twitch DJ controller) has no MIDI connectivity, so it would have been much easier for the Novation engineers to just pass each MIDI message as USB interrupt transfers.
As for the way it streams the MIDI bytes as soon as they're ready, I'm assuming this simplified the hardware, and the stream is intended to be interpreted like bytecode: each MIDI message begins with a status byte that indicates the number of data bytes that will follow it (analogous to an opcode, followed by some immediates).
Note: Status bytes also have a leading 1, while data bytes have a leading 0, so they are easy to tell apart (and SysEx messages use specific start and end bytes).
In the end, it was simple enough to use the status bytes to indicate when to instantiate a new message, and what type it should be. I then implemented a set of MIDI message classes (NoteOn, Control, SysEx etc) that each know when they have the right number of bytes (to simplify the logic for each individual message).
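The reassembly logic described above can be sketched like this (a simplified sketch: the length table covers only the common channel voice messages, and running status and SysEx are left out):

```javascript
// Status bytes (high bit set) start a new message; data bytes (high bit
// clear) are appended until the expected length for that status is reached.
const DATA_BYTES = { 0x80: 2, 0x90: 2, 0xA0: 2, 0xB0: 2,
                     0xC0: 1, 0xD0: 1, 0xE0: 2 };

function createMidiAssembler(onMessage) {
  let current = null;
  let expected = 0;
  return function push(byte) {
    if (byte & 0x80) {                         // status byte: new message
      current = [byte];
      expected = DATA_BYTES[byte & 0xF0] ?? 0;
    } else if (current) {                      // data byte: append
      current.push(byte);
    }
    if (current && current.length === 1 + expected) {
      onMessage(current);                      // complete message delivered
      current = null;
    }
  };
}
```

Feeding the bytes of each `USBTransferInResult` into `push` one at a time makes the one-byte-per-transfer behaviour harmless: messages come out whole regardless of how the transfers were split.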

Explanation of NetworkStream.Read behaviour needed

I am consuming real-time data from a network stream using a blocking read as follows:
Do
    BytesRead = NetworkStream.Read(Bytes, 0, ReceiveBufferSize)
    'Do stuff with the first BytesRead bytes here
Loop
Watching packets come in on the wire in Wireshark, I see that sometimes when a new packet comes in, .NET sees it immediately and unblocks, letting me process it. Other times, multiple packets will come in on the wire before the NetworkStream.Read unblocks and returns the whole lot in one go - I've seen up to 8 packets buffer before the NetworkStream read unblocks.
Is this expected behaviour? Is there a way to grab and process each packet immediately as it is received across the wire? Will an Async receive model make any difference here? Or am I just fundamentally misunderstanding the way that TCP streams work?

Recording Audio on iPhone and Sending Over Network with NSOutputStream

I am writing an iPhone application that needs to record audio from the built-in microphone and then send that audio data to a server for processing.
The application uses a socket connection to connect to the server and Audio Queue Services to do the recording. What I am unsure of is when to actually send the data. Audio Queue Services fires a callback each time it has filled a buffer with some audio data. NSOutputStream fires an event each time it has space available.
My first thought was to send the data to the server on the Audio Queue callback. But it seems like this would run into a problem if the NSOutputStream does not have space available at that time.
Then I thought about buffering the data as it comes back from the Audio Queue and sending some each time the NSOutputStream fires a space-available event. But this seems to have a problem too: if sending to the server gets ahead of the recording, there will be nothing to write on a space-available event, so the event will not fire again and the data transfer will effectively stall.
So what is the best way to handle this? Should I have a timer that fires repeatedly and see if there is space available and there is data that needs to be sent? Also, I think I will need to do some thread synchronization so that I can take chunks of data out of my buffer to send across the network but also add chunks of data to the buffer as the recording proceeds without risking mangling my buffer.
You could use a ring buffer to hold a certain number of audio frames and drop frames if the buffer exceeds a certain size. When your stream-has-space-available callback gets called, pull a frame off the ring buffer and send it.
CHDataStructures provides a few ring-buffer (which it calls “circular buffer”) classes.
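The suggested ring buffer might look like this (sketched in JavaScript for illustration, since the rest of this page's code is JavaScript; in the Objective-C context above, CHDataStructures' circular buffer plays this role):

```javascript
// Sketch: a drop-oldest ring buffer for audio frames. The recording callback
// pushes frames; the stream's space-available callback pulls them.
class FrameRing {
  constructor(capacity) {
    this.capacity = capacity;
    this.frames = [];
  }
  push(frame) {
    this.frames.push(frame);
    if (this.frames.length > this.capacity) this.frames.shift(); // drop oldest
  }
  pull() {
    return this.frames.shift(); // undefined when empty => nothing to send yet
  }
}
```

Because `pull` simply returns nothing when the buffer is empty, the stalled-stream case in the question resolves itself: the next recording callback pushes a frame, and the sender can re-check for space then (or on a timer, as suggested).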

iPhone: Sending large data with Game Kit

I am trying to write an app that exchanges data with other iPhones running the app through the Game Kit framework. The iPhones discover each other and connect fine, but the problem happens when I send the data. I know the iPhones are connected properly, because when I serialize an NSString and send it through the connection it comes out on the other end fine. But when I try to archive a larger object (using NSKeyedArchiver) I get the error message "AGPSessionBroadcast failed (801c0001)".
I am assuming this is because the data I am sending is too large (my files are about 500k in size, Apple seems to recommend a max of 95k). I have tried splitting up the data into several transfers, but I can never get it to unarchive properly at the other end. I'm wondering if anyone else has come up against this problem, and how you solved it.
I had the same problem w/ files around 300K. The trouble is the sender needs to know when the receiver has emptied the pipe before sending the next chunk.
I ended up with a simple state engine that ran on both sides. The sender transmits a header with how many total bytes will be sent and the packet size, then waits for acknowledgement from the other side. Once it gets the handshake it proceeds to send fixed size packets each stamped with a sequence number.
The receiver gets each one, reads it and appends it to a buffer, then writes back to the pipe that it got packet with the sequence #. Sender reads the packet #, slices out another buffer's worth, and so on and so forth. Each side keeps track of the state they're in (idle, sending header, receiving header, sending data, receiving data, error, done etc.) The two sides have to keep track of when to read/write the last fragment since it's likely to be smaller than the full buffer size.
This works fine (albeit a bit slow) and it can scale to any size. I started with 5K packet sizes but it ran pretty slow. Pushed it to 10K but it started causing problems so I backed off and held it at 8096. It works fine for both binary and text data.
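The header-then-fixed-size-packets scheme described above can be sketched like this (JavaScript for illustration; `send` and `waitForAck` are hypothetical stand-ins for the GameKit send call and the acknowledgement read back from the peer):

```javascript
// Sketch: stop-and-wait chunked transfer. A header announces the total size,
// then fixed-size packets are sent, each acknowledged by sequence number
// before the next goes out.
const PACKET_SIZE = 8096; // the size the answer above settled on

function* packetize(data) {
  for (let seq = 0; seq * PACKET_SIZE < data.length; seq++) {
    yield {
      seq,
      payload: data.slice(seq * PACKET_SIZE, (seq + 1) * PACKET_SIZE),
    };
  }
}

async function sendAll(data, send, waitForAck) {
  send({ header: true, total: data.length, packetSize: PACKET_SIZE });
  await waitForAck(-1);            // handshake before the first packet
  for (const packet of packetize(data)) {
    send(packet);
    await waitForAck(packet.seq);  // stop-and-wait per packet
  }
}
```

The last packet naturally comes out smaller than `PACKET_SIZE`, which is the fragment case both sides have to track.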
Bear in mind that GameKit isn't a general file-transfer API; it's meant more for updates of where the player is, where other objects are, and so on. So sending 300k for a game doesn't seem that sensible, though I can understand hijacking the API for general sharing mechanisms.
The problem is that it isn't a TCP connection; it's more a UDP (datagram) connection. In these cases, the data isn't a stream (which gets packeted by TCP) but rather a giant chunk of data. (Technically, UDP can be fragmented into multiple IP packets - but lose one of those, and the entire UDP is lost, as opposed to TCP, which will re-try).
The MTU for most wired networks is ~1.5k; for bluetooth, it's around ~0.5k. So any UDP packet that you sent (a) may get lost, (b) may be split into multiple MTU-sized IP packets, and (c) if one of those packets is lost, then you will automatically lose the entire set.
Your best strategy is to emulate TCP: send out packets with sequence numbers, and let the receiving end request retransmission of any packets that went missing. If you're using the equivalent of an NSKeyedArchiver, then one suggestion is to iterate through the keys and write those out as individual keys (assuming each keyed value isn't that big on its own). You'll need some kind of ACK for each packet that gets sent back, and a total ACK when you're done, so the sender knows it's OK to drop the data from memory.