How to calculate delay in RTP packets using RTP time and NTP time from RTCP - WebRTC

I am sending a video stream from the browser to GStreamer using WebRTC. In GStreamer I can get the RTP time of the packets and the NTP time from RTCP SR packets. At the receiver I want to calculate the time elapsed since each packet was created at the sender.
I am currently calculating the delay as:
delay = send time of packet - receive time of packet
Since the clients do not all share the same NTP time, I also compute the clock offset:
time difference = NTP of receiver - NTP of sender
I convert the RTP time of every RTP packet to an NTP value, where 90000 is the clock rate, converting the result to nanoseconds:
send time of packet = ((RTP time in RTP packet - RTP time in RTCP SR) / 90000) * 1000000000 + NTP in RTCP SR
delay = (receiver NTP in ns - time difference) - send time of packet in ns
The delay comes out at around 12,046,392 ns on a local network. I don't think that is correct. Am I doing something wrong in the calculation?
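In code form, the calculation above looks roughly like this (a minimal sketch with illustrative numbers; the variable names are mine):

```python
CLOCK_RATE = 90000  # RTP video clock rate in Hz

def rtp_to_sender_ntp_ns(rtp_ts, sr_rtp_ts, sr_ntp_ns):
    """Map an RTP timestamp to the sender's wall clock (ns) via the last RTCP SR."""
    return (rtp_ts - sr_rtp_ts) * 1_000_000_000 // CLOCK_RATE + sr_ntp_ns

# Illustrative numbers: the last SR paired RTP time 1000 with sender NTP time t0.
t0 = 1_700_000_000_000_000_000                        # sender NTP at the SR, in ns
send_time_ns = rtp_to_sender_ntp_ns(1900, 1000, t0)   # packet created 10 ms after SR
receiver_ntp_ns = t0 + 40_000_000                     # receiver clock at arrival
offset_ns = 25_000_000                                # receiver NTP - sender NTP
delay_ns = (receiver_ntp_ns - offset_ns) - send_time_ns
print(delay_ns)   # 5000000 ns = 5 ms one-way delay
```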

RTCP SR helps map RTP timestamps (relative, based on the sample clock) to NTP timestamps (global absolute wall-clock time).
I am unsure what exactly is meant by "At the receiver I want to calculate the time elapsed since that packet was created at the sender."
If you are interested in calculating the time difference in seconds between two of those RTP timestamps, you need to know the media/sample clock rate generating them. You can derive this clock rate by correlating with RTCP SR NTP timestamps, but you need at least two of them.
If you want to know how much time has elapsed since the sender created the RTCP SR, just calculate the difference between the current time and the received NTP timestamp. The RTCP SR NTP timestamp is literally the current time when the packet was created.
The RTP timestamp attached to the NTP one inside an RTCP SR packet is the interpolated current media/sample clock time. Through this relation one can compute the absolute wall time for every RTP timestamp, but not without knowing at least the media clock rate (e.g. by deriving it from two RTCP SR packets, or by reading it from a media header, since the player obviously needs to know the media clock rate for correct playback speed).
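To make that concrete, here is a sketch of deriving the clock rate from two RTCP SRs and then mapping any RTP timestamp to wall time (all values are illustrative, and the NTP timestamps are assumed to be already converted to seconds):

```python
# Two RTCP SRs give two (NTP, RTP) pairs; the slope between them is the clock rate.
ntp1, rtp1 = 100.000, 90_000     # first SR: wall clock (s), media clock (ticks)
ntp2, rtp2 = 101.000, 180_000    # second SR, one second later

clock_rate = (rtp2 - rtp1) / (ntp2 - ntp1)    # ticks per second -> 90000.0

def rtp_to_wall(rtp_ts):
    """Absolute wall time (s) for any RTP timestamp, by linear interpolation."""
    return ntp2 + (rtp_ts - rtp2) / clock_rate

print(clock_rate)            # 90000.0
print(rtp_to_wall(184_500))  # 101.05: 4500 ticks = 50 ms after the second SR
```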
TL;DR:
Can't compute because crucial info is missing.
Edit:
Your calculations look good to me. I assume one packet of RTP data contains multiple media samples. The RTP timestamp references the time of the first byte of the first sample in a packet, so the time when a packet left the sender should be close to RTP_to_NTP_timestamp_in_seconds + (number_of_samples_in_packet / clock). I guess the media data in one packet could add up to multiple milliseconds. Maybe number_of_samples_in_packet / clock is about 12 milliseconds? If so, the time between two RTP packets should also be close to 12 ms.
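A quick sanity check of that guess, assuming the 90 kHz video clock from the question:

```python
clock = 90_000               # RTP video clock rate (Hz)
delay_s = 12_046_392 / 1e9   # the ~12 ms delay reported in the question
print(delay_s * clock)       # ~1084 media-clock ticks, i.e. ~12 ms of samples
```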
Edit2:
From RFC 4587: "If a video image occupies more than one packet, the timestamp SHALL be the same on all of those packets."
This could also affect your calculation, but the effect should be minor, I guess.

Related

How to send 2MB of data through UDP?

I am using a TMS570LS3137 (DP84640 PHY) and trying to program UDP (unicast) using lwIP to send 2 MB of data.
As of now I can send up to 63 KB of data. How do I send 2 MB at a time? UDP supports only up to ~64 KB per datagram, but in this link:
https://stackoverflow.com/questions/32512345/how-to-send-udp-packets-of-size-greater-than-64-kb#:~:text=So%20it's%20not%20possible%20to,it%20up%20into%20multiple%20datagrams.
it is mentioned that "If you need to send larger messages, you need to break it up into multiple datagrams." How do I proceed with this?
Since UDP uses IP, you're limited to the maximum IP packet size of 64 KiB generally, even with fragmentation. So, the hard limit for any UDP payload is 65,535 - 28 = 65,507 bytes.
You need to either:
chunk your data into multiple datagrams. Since datagrams may arrive out of sending order or even get lost, this requires some kind of protocol or header. That could be as simple as four bytes at the beginning defining the buffer offset the data goes to, or a datagram sequence number (see the sketch after this list). While you're at it, you won't want to rely on IP fragmentation; depending on the scenario, use either the maximum UDP payload size over plain Ethernet (1500 bytes MTU - 20 bytes IP header - 8 bytes UDP header = 1472 bytes) or a sane maximum that should work all the time (e.g. 1432 bytes);
or use TCP, which can transport arbitrarily sized data and does all the work for you.
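A minimal sketch of the first option, using standard Python sockets rather than lwIP (the 4-byte big-endian offset header and all names here are my own illustration, not part of any standard):

```python
import socket
import struct

CHUNK = 1432                  # conservative per-datagram payload size (bytes)
DEST = ("192.0.2.1", 5000)    # illustrative destination address and port

def send_buffer(data: bytes) -> None:
    """Split data into datagrams, each prefixed with its 4-byte buffer offset."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for off in range(0, len(data), CHUNK):
        # The big-endian offset header lets the receiver place each chunk
        # correctly even if datagrams arrive out of order; lost chunks still
        # need a retransmission or FEC scheme on top of this.
        sock.sendto(struct.pack("!I", off) + data[off:off + CHUNK], DEST)
    sock.close()

send_buffer(bytes(2 * 1024 * 1024))   # 2 MB -> about 1465 datagrams
```

A receiver would read the first four bytes of each datagram and copy the remaining payload to that offset in its 2 MB buffer.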

Which meaning of "live" is used in TTL (Time To Live)?

https://en.wikipedia.org/wiki/Time_to_live
Live as in: The show will go live on air this evening.
or Live as in: I want to live in Paris.
For years I thought it was the first definition, but it just occurred to me that it makes more sense as the second.
It's the second. It's the amount of time that said packet has left to live, or alternatively the amount of time left until it dies, as opposed to the amount of time until it goes live.
For IP and DNS it is the second definition. For example, in IP it indicates the number of hops the packet has left to live before it dies. Each "hop" reduces the TTL by 1 until the packet reaches 0 (and dies) or reaches its destination.

Is using implied data in a message to compute a CRC a good design strategy?

We are sending UDP messages from one device to another. There is a timestamp in the message, transmitted in a 16-bit field. The receiver keeps track of the number of times the field "rolls over", so that time spans requiring more than 16 bits can be tracked. The protocol designer has decided that we should use the entire 32-bit timestamp to compute the CRC for the message. Is this a good idea? Note that our message period is much smaller than the roll-over period.
Since you are apparently in control of the messages, you should just transmit the 32-bit timestamp in the message.
What is the size of the CRC? If it is a 16-bit CRC, you could forgo the error-detection function completely and solve the equations to recover the missing 16 bits of the timestamp from the transmitted message and the CRC. But if you're going to do that, why not just send the other 16 bits of the timestamp directly in the CRC field, instead of a CRC?
If it is a 32-bit CRC, you could again solve for them and be left with 16 bits of "strength" in the error detection instead of 32. Again, one would have to ask why you wouldn't just send the other 16 bits of the timestamp and put a 16-bit CRC in what remains.
Or if you can change the length of the message, just add the missing 16 bits of timestamp, leaving the CRC and its error-detection capability intact.
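To make the trade-off concrete, here is a sketch of the designer's scheme as I understand it (the message layout is my own illustration; it assumes a CRC-16-CCITT via Python's binascii.crc_hqx and a receiver-side rollover counter):

```python
import binascii
import struct

def make_message(timestamp32: int, payload: bytes) -> bytes:
    """Sender: the CRC covers the full 32-bit timestamp, but only the
    low 16 bits of it are actually transmitted."""
    covered = struct.pack("!I", timestamp32) + payload
    crc = binascii.crc_hqx(covered, 0xFFFF)   # CRC-16-CCITT
    return struct.pack("!H", timestamp32 & 0xFFFF) + payload + struct.pack("!H", crc)

def check_message(msg: bytes, rollovers: int) -> bool:
    """Receiver: reconstruct the implied high 16 bits from the locally
    tracked rollover count, then verify the CRC over the full timestamp."""
    (ts16,) = struct.unpack("!H", msg[:2])
    (crc_rx,) = struct.unpack("!H", msg[-2:])
    timestamp32 = (rollovers << 16) | ts16
    covered = struct.pack("!I", timestamp32) + msg[2:-2]
    return binascii.crc_hqx(covered, 0xFFFF) == crc_rx

msg = make_message(0x00031234, b"hello")
print(check_message(msg, 3))   # True only while the rollover count is correct
```

Note the coupling this creates: a receiver whose rollover count is off will see every message as corrupted, since a CRC failure and a wrong implied timestamp are indistinguishable.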

Maximum transfer rate isochronous 128B endpoint full speed

The USB specification (Table 5-4) states that, for an isochronous endpoint with a maxPacketSize of 128 bytes, as many as 10 transactions can be done per frame. This gives 128 * 10 * 1000 = 1.28 MB/s of theoretical bandwidth.
At the same time it states
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Isn't that contradictory to the aforementioned table?
I've done some tests and found that only one transaction is done per frame on my device. I also found on several websites that just one transaction can be done per frame (ms). Of course I assume the spec is the correct reference, so my question is: what could be the cause of receiving only one packet per frame? Am I misunderstanding the spec, and is what I think of as a transaction actually something else?
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Assuming USB Full Speed you could still have 10 isochronous 128 byte transactions per frame by using 10 different endpoints.
Table 5-4 seems to omit the constraints of chapter 5.6.4 "Isochronous Transfer Bus Access Constraints": the 90% rule reduces the maximum number of 128-byte isochronous transactions per frame to nine.
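The arithmetic behind that, assuming Table 5-4's figure of 9 bytes of protocol overhead per full-speed isochronous transaction:

```python
frame_bytes = 1500            # 12 Mbit/s * 1 ms / 8 = 1500 bytes per FS frame
per_txn = 128 + 9             # 128-byte payload + 9 bytes protocol overhead
print(frame_bytes // per_txn)             # 10 -> the Table 5-4 figure
print(int(frame_bytes * 0.9) // per_txn)  # 9 -> after the 90% periodic limit
```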

Fastest rate at which CANopen PDOs are received

Assuming the highest baud rate, what is the highest rate at which PDOs are received?
That depends on the length of the PDO, i.e. how many data bytes each CAN message carries. The ratio between transported data and protocol overhead is best when you use the full eight bytes of one CAN message.
If you want high throughput, use all eight bytes of one message.
If you want the highest possible frequency, use as few data bits as possible.
A rule of thumb:
Eight bytes of payload result in a CAN message about 100 bits long.
With 1 Mbit/s maximum baud rate you can achieve about 10000 messages per second.
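The rule of thumb in numbers (a rough sketch; the exact frame length depends on bit stuffing and on whether 11-bit or 29-bit identifiers are used):

```python
overhead_bits = 44 + 3        # CAN 2.0A framing fields plus 3-bit interframe space
frame_bits = overhead_bits + 8 * 8   # 111 bits for a full 8-byte frame, unstuffed
baud = 1_000_000              # 1 Mbit/s, the highest classic CAN bit rate

print(baud / 100)             # 10000.0 -> the rule-of-thumb figure
print(baud / frame_bits)      # ~9009 frames/s without bit stuffing
print(baud / 135)             # ~7407 frames/s with worst-case stuffing
```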