Why does WebRTC RTX need a different payload type?

Upon reading the WebRTC RTX RFC (RFC 4588):
The payload type is dynamic. If multiple payload types using retransmission are present in the original stream, then for each of these, a dynamic payload type MUST be mapped to the retransmission payload format. See Section 8.1 for the specification of how the mapping between original and retransmission payload types is done with Session Description Protocol (SDP).
But why? If retransmission uses a different SSRC, using the same payload type as the media payload type would not affect the recovery process, right?
I just want to know the design reason behind this approach!

While you could theoretically use the same payload type and demultiplex by SSRC instead of payload type, SDP cannot properly express this, as you would end up with two rtpmap lines with the same payload type but different codec names.
(One of the downsides of that requirement is that you need one RTX payload type per original payload type instead of just a single RTX payload type.)
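For illustration, RFC 4588 Section 8.1 ties each RTX payload type back to its original one via the apt (associated payload type) parameter in SDP; the payload type numbers below are arbitrary examples:

    m=video 9 UDP/TLS/RTP/SAVPF 96 97 98 99
    a=rtpmap:96 VP8/90000
    a=rtpmap:97 rtx/90000
    a=fmtp:97 apt=96
    a=rtpmap:98 VP9/90000
    a=rtpmap:99 rtx/90000
    a=fmtp:99 apt=98

If 96 were reused for both VP8 and its retransmissions, a second rtpmap:96 line with codec name rtx would be ambiguous, which is exactly the problem the answer describes.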

Related

Does VP9 handle packet loss, or do I have to handle it manually?

I am capturing my screen in real time and encoding it with the VP9 codec (using JNI). Encoded frames are I-frames or P-frames. I then divide them into chunks (sub-frames) and send them over the network. At the receiving end there is some natural packet loss, and even a single missing sub-frame makes it impossible to reconstruct the corresponding I/P-frame. I tried to simulate the same thing locally (randomly throwing out some sub-frames) and the same thing happened. Doesn't the VP9 codec have some built-in packet loss handling? If so, how do I enable it, and how well can it cope up to a certain loss percentage?
And if there is no built-in packet loss handling, do I have to implement FIR or FEC manually? And what should I follow?
Thanks in advance.
The common way to send a video stream is the RTP protocol over UDP; among other libraries, WebRTC also uses this transport under the hood. Before sending, each encoded frame is packetized, i.e. split into one or more RTP packets. In this context the term "packet loss" means RTP packet loss. These losses are handled by the sending peer using RTCP Receiver Reports from the other peer: the sender can retransmit lost packets. So such reconstruction is not related to VP9 or any other specific codec.
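To make "packetized" concrete, here is a minimal sketch of splitting one encoded frame into RTP packets (header layout per RFC 3550; this is an illustration, not WebRTC's actual packetizer, and a real VP9 packetizer also prepends a payload descriptor; send_packet is a stand-in for your UDP send):

    #include <stdint.h>
    #include <string.h>

    #define RTP_HDR_LEN 12
    #define MAX_PAYLOAD 1200   /* keep packets under a typical MTU */

    /* Write a minimal 12-byte RTP header (RFC 3550) into buf. */
    static void write_rtp_header(uint8_t *buf, uint8_t pt, int marker,
                                 uint16_t seq, uint32_t ts, uint32_t ssrc) {
        buf[0] = 0x80;  /* V=2, no padding, no extension, no CSRCs */
        buf[1] = (uint8_t)((marker << 7) | (pt & 0x7F));
        buf[2] = (uint8_t)(seq >> 8);   buf[3] = (uint8_t)seq;
        buf[4] = (uint8_t)(ts >> 24);   buf[5] = (uint8_t)(ts >> 16);
        buf[6] = (uint8_t)(ts >> 8);    buf[7] = (uint8_t)ts;
        buf[8] = (uint8_t)(ssrc >> 24); buf[9] = (uint8_t)(ssrc >> 16);
        buf[10] = (uint8_t)(ssrc >> 8); buf[11] = (uint8_t)ssrc;
    }

    /* Split one encoded frame into RTP packets. All packets of a frame
     * share the same timestamp; the marker bit flags the last packet. */
    static void packetize_frame(const uint8_t *frame, size_t len,
                                uint16_t *seq, uint32_t ts, uint32_t ssrc,
                                void (*send_packet)(const uint8_t *, size_t)) {
        uint8_t pkt[RTP_HDR_LEN + MAX_PAYLOAD];
        size_t off = 0;
        while (off < len) {
            size_t chunk = len - off > MAX_PAYLOAD ? MAX_PAYLOAD : len - off;
            int last = (off + chunk == len);
            write_rtp_header(pkt, 98 /* example dynamic PT */,
                             last, (*seq)++, ts, ssrc);
            memcpy(pkt + RTP_HDR_LEN, frame + off, chunk);
            send_packet(pkt, RTP_HDR_LEN + chunk);
            off += chunk;
        }
    }

The sequence numbers let the receiver detect which packets are missing and report them via RTCP, which is what drives retransmission.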
As VP9 uses entropy coding, even a single missing packet makes it impossible to reconstruct the I/P-frame. Failure to reconstruct an I-frame in turn makes it impossible to reconstruct all subsequent, dependent P-frames. As I am using raw VP9, I have to implement some kind of retransmission or redundancy myself.
There is the concept of error-resilient mode and the golden frame, which can be thought of as a budget version of an I-frame, which I need to send at certain intervals from the sender so that the I-frame and subsequent P-frames have some resiliency. (I tried it and failed to get the encoder to generate golden frames by enabling the parameter; maybe I will have to generate them myself.)
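For reference, this is roughly how error-resilient mode and a forced golden-frame refresh are requested through the libvpx encoder API (a minimal sketch; how much it actually helps depends on the loss pattern):

    #include <vpx/vpx_encoder.h>
    #include <vpx/vp8cx.h>   /* VP8_EFLAG_* frame flags also apply to VP9 */

    /* Initialize a VP9 encoder with error-resilient mode enabled,
     * which is intended to keep the stream decodable across lost frames. */
    int init_resilient_encoder(vpx_codec_ctx_t *codec, unsigned w, unsigned h) {
        vpx_codec_enc_cfg_t cfg;
        if (vpx_codec_enc_config_default(vpx_codec_vp9_cx(), &cfg, 0))
            return -1;
        cfg.g_w = w;
        cfg.g_h = h;
        cfg.g_error_resilient = VPX_ERROR_RESILIENT_DEFAULT;
        return vpx_codec_enc_init(codec, vpx_codec_vp9_cx(), &cfg, 0) ? -1 : 0;
    }

    /* Encode one frame; set force_golden periodically to refresh the
     * golden reference so later P-frames have a recovery point. */
    int encode_frame(vpx_codec_ctx_t *codec, vpx_image_t *img,
                     vpx_codec_pts_t pts, int force_golden) {
        vpx_enc_frame_flags_t flags = force_golden ? VP8_EFLAG_FORCE_GF : 0;
        return vpx_codec_encode(codec, img, pts, 1 /* duration */,
                                flags, VPX_DL_REALTIME) ? -1 : 0;
    }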

Message versioning in RabbitMQ / AMQP?

What is the recommended way to deal with message versioning? The main schools of thought appear to be:
1. Always create a new message class as the message structure changes.
2. Never use (pure) serialized objects as a message. Always use some kind of version header field and a byte stream body field. In this way, the receiver can always accept the message and check the version number before attempting to read the message body.
3. Never use binary serialized objects as a message. Instead, use a textual form such as JSON. In this way, the receiver can always accept the message, check the version number, and then (when possible) understand the message body.
As I want to keep my messages compact, I am considering using Google Protocol Buffers, which would allow me to satisfy both 2 and 3.
However, I am interested in real-world experience and advice on how to handle the versioning of messages as their structure changes.
In this case, the "version" is basically metadata about the message, and this metadata is an instruction/hint to the processing algorithm. So I would suggest putting such metadata in a header (outside of the payload), so that the consumer can read the metadata first before trying to parse and process the message payload. For example, if you keep the version info in the payload and for some reason the message payload is corrupted, the algorithm will fail to parse the message and can never even reach the metadata you put there.
You may consider having both the version and the type info of the payload in headers.
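A minimal sketch of this with the rabbitmq-c client, carrying version and type as AMQP header fields so the consumer can inspect them before touching the payload (header names, the version value, and the "OrderPlaced" type are made-up examples; assumes channel 1 is already open):

    #include <amqp.h>
    #include <amqp_tcp_socket.h>
    #include <string.h>

    /* Publish `body` with "version" and "type" in the AMQP headers table. */
    void publish_versioned(amqp_connection_state_t conn, const char *queue,
                           const void *body, size_t body_len) {
        amqp_table_entry_t entries[2];
        entries[0].key = amqp_cstring_bytes("version");
        entries[0].value.kind = AMQP_FIELD_KIND_I32;
        entries[0].value.value.i32 = 2;
        entries[1].key = amqp_cstring_bytes("type");
        entries[1].value.kind = AMQP_FIELD_KIND_UTF8;
        entries[1].value.value.bytes = amqp_cstring_bytes("OrderPlaced");

        amqp_basic_properties_t props;
        memset(&props, 0, sizeof props);
        props._flags = AMQP_BASIC_HEADERS_FLAG | AMQP_BASIC_CONTENT_TYPE_FLAG;
        props.content_type = amqp_cstring_bytes("application/octet-stream");
        props.headers.num_entries = 2;
        props.headers.entries = entries;

        amqp_bytes_t payload = { .len = body_len, .bytes = (void *)body };
        amqp_basic_publish(conn, 1, amqp_empty_bytes /* default exchange */,
                           amqp_cstring_bytes(queue), 0, 0, &props, payload);
    }

A consumer then reads props.headers from the delivery first and can reject or route an unknown version without ever deserializing the body.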

How to understand Bulk transfer using libusb

Say I have a USB device, a camera for instance, and I would like to transfer the image sequence captured by the camera to the host using the libusb API.
The following points are not clear to me:
1. How is the IN endpoint on the device populated? Is it always the full image data of one frame (optionally plus some status data)?
2. libusb_bulk_transfer() has a parameter length to specify how much data the host wants to read IN, and another parameter transferred indicating how much data was actually transferred. The question is: should I always request the same amount of data that the IN endpoint would send? If so, what would cause transferred to be smaller than length?
3. How is it determined how much data will be sent by the IN endpoint upon each transfer request?
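For reference on point 2: a host-side bulk read typically requests a buffer at least as large as the device might send and then checks transferred; the device ends a transfer early with a short packet, which is exactly when transferred < length. A minimal sketch (endpoint address and buffer size are made-up examples):

    #include <libusb-1.0/libusb.h>
    #include <stdio.h>

    #define EP_IN  0x81          /* example bulk IN endpoint address */
    #define BUFLEN (16 * 1024)   /* request more than one max-packet */

    /* Read one bulk transfer; the device decides how much it sends. */
    int read_bulk_chunk(libusb_device_handle *dev, unsigned char *buf) {
        int transferred = 0;
        int rc = libusb_bulk_transfer(dev, EP_IN, buf, BUFLEN,
                                      &transferred, 1000 /* ms timeout */);
        if (rc == 0)
            printf("got %d of %d requested bytes\n", transferred, BUFLEN);
        return rc == 0 ? transferred : -1;
    }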

USB HID: why should I write "null" to the control pipe in the OUT endpoint interrupt

Digging around with/for HID reports, I ran into a strange problem with a USB HID device. I'm implementing a HID-class device and have based my program on the HID USB example supplied by Keil. Some code has been changed in this project, and it seems to work fine with 32-byte input and 32-byte output reports. Somehow, after thousands of data transfers, Endpoint 1 OUT would hang and become a bad pipe. I searched Google for tips, and one topic suggested writing a zero-length packet after sending a packet whose length matches what is defined in the report descriptor. But that didn't work for me. Then I wrote a zero-length packet to the control pipe after receiving an OUT packet, and magically, it works! It never hangs, even after millions of transfers!
Here is my question: why does it work after writing a zero-length packet to the control pipe? The data transfer on the OUT pipe should have no relationship with the data on the control pipe. It confuses me!
If you transfer less data than the expected payload size and the data ends on an exact multiple of the endpoint's max packet size, you must send a Zero-Length Packet to indicate that the transfer is complete.
But it depends heavily on the implementation of the host controller, and not all devices follow the specification to the letter; some may stall.
Source:
When do USB Hosts require a zero-length IN packet at the end of a Control Read Transfer?
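To state the rule from the linked answer in code form (a pure sketch; send_packet() is a hypothetical stand-in for your USB stack's low-level send routine):

    #include <stddef.h>

    /* Hypothetical stand-in for the firmware's low-level send routine. */
    void send_packet(int ep, const unsigned char *data, size_t len);

    /* Send `len` bytes on endpoint `ep`, honoring the USB framing rule:
     * a transfer ends with a short packet, so when the data is an exact
     * multiple of the max packet size (and shorter than the host asked
     * for), a Zero-Length Packet must follow to terminate it. */
    void send_transfer(int ep, const unsigned char *data, size_t len,
                       size_t max_packet, size_t host_expected) {
        size_t off = 0;
        while (off < len) {
            size_t chunk = (len - off > max_packet) ? max_packet : len - off;
            send_packet(ep, data + off, chunk);
            off += chunk;
        }
        if (len < host_expected && len % max_packet == 0)
            send_packet(ep, NULL, 0);   /* the Zero-Length Packet */
    }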

NServiceBus Specify BinarySerializer for certain message types but not for all

Does NServiceBus 2.0 allow defining a serializer for a given message type?
I want all but one of my messages to be serialized using the XmlSerializer. The remaining one should be serialized using the BinarySerializer.
Is it possible with NServiceBus 2.0?
I believe the serializer is specified on an endpoint basis, so all messages using that endpoint would use the same serializer.
However, if you follow the rote NServiceBus recommendation of one message type per endpoint/queue then you could effectively isolate one message type and use a different serializer for it.
I'm curious, however, what is special about the one message type that requires binary serialization?
Edit in response to comment
The Distributor documentation indirectly mentions this under "Routing with the Distributor". Udi Dahan also frequently advises this in the NServiceBus Yahoo Group, although it's difficult to provide links because the search there is poor.
Basically, the idea is that you wouldn't want high priority messages to get stuck behind lower-priority ones, and also that this provides you with the greatest flexibility to scale out certain message processing if necessary.
Because the MsmqTransportConfig only allows for one InputQueue to be specified, having one message type per queue also means that you only have one message handler per endpoint.
To address the image, you may still be able to encapsulate it in an XML-formatted message by encoding the byte array as a Base64 string. It's not ideal, but if your images aren't too large, this may be easier than going to the trouble of using a different serializer for only one message type.
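In .NET this is just a call to Convert.ToBase64String before the bytes go into the XML message. As a language-neutral sketch of what that encoding involves (illustrative only, not NServiceBus code):

    #include <stdlib.h>

    /* Encode `len` bytes as Base64 so binary image data can travel
     * inside a text (e.g. XML) message body. Caller frees the result. */
    char *base64_encode(const unsigned char *in, size_t len) {
        static const char tbl[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        size_t olen = 4 * ((len + 2) / 3);
        char *out = malloc(olen + 1);
        if (!out) return NULL;
        size_t i, j;
        for (i = 0, j = 0; i + 2 < len; i += 3) {   /* whole 3-byte groups */
            unsigned v = (in[i] << 16) | (in[i + 1] << 8) | in[i + 2];
            out[j++] = tbl[(v >> 18) & 63];
            out[j++] = tbl[(v >> 12) & 63];
            out[j++] = tbl[(v >> 6) & 63];
            out[j++] = tbl[v & 63];
        }
        if (i < len) {                              /* 1 or 2 trailing bytes */
            unsigned v = in[i] << 16;
            if (i + 1 < len) v |= in[i + 1] << 8;
            out[j++] = tbl[(v >> 18) & 63];
            out[j++] = tbl[(v >> 12) & 63];
            out[j++] = (i + 1 < len) ? tbl[(v >> 6) & 63] : '=';
            out[j++] = '=';
        }
        out[j] = '\0';
        return out;
    }

Note the roughly 33% size overhead, which is why this is only reasonable for small images.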
Another option is to store the image data out-of-band in a database or filesystem and then refer to it by an ID or path (respectively).
Not possible in version 2, but it can be done using the pipeline in versions 5 and above: http://docs.particular.net/samples/pipeline/multi-serializer/