What is the difference between sdpMid and sdpMLineIndex in an RTCIceCandidate when doing trickle ice? - webrtc

I was debugging a WebRTC trickle ICE exchange the other day and realized I never paid much attention to the candidate messages (generated by calling RTCIceCandidate.toJSON()) that look like this:
{"candidate":"candidate:394300051 1 tcp 1518214911 192.168.1.12 9 typ host tcptype active generation 0 ufrag rfBJ network-id 1",
"sdpMid":"0","sdpMLineIndex":0}
In the above JSON message, what exactly do sdpMid and sdpMLineIndex represent? They always appear to have the same values (either 0/"0" or 1/"1").
Is it correct to say:
That sdpMid corresponds to the a=mid line for a stream in the initial SDP? That is, if the line for the audio stream was declared as a=mid:audio, then the candidate's sdpMid value would have been "audio" as well.
That sdpMLineIndex is the index of the stream as it appeared in the SDP? That is, if audio was first in the SDP, this value is 0, and if video was second, it would be 1?
In other words, sdpMid is a string name for the stream and sdpMLineIndex is an index value. But the standard convention used by most implementations is to just have these values be the same.
Is this correct?

For offers currently generated by browsers, sdpMid and sdpMLineIndex are equivalent in simple cases. They are not equivalent in cases like stopping a transceiver (using .stop()) and then generating a new offer: the new offer usually gets a new mid, which may be an incrementally generated number, whereas the sdpMLineIndex may not increment if a previously unused m= line gets recycled.
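To make the mapping concrete, here is a trimmed, purely illustrative SDP excerpt (the exact attributes a browser emits will differ):

m=audio 9 UDP/TLS/RTP/SAVPF 111      <- first m= section, so sdpMLineIndex 0
a=mid:0                              <- its mid, so candidates for it carry sdpMid "0"
m=video 9 UDP/TLS/RTP/SAVPF 96       <- second m= section, so sdpMLineIndex 1
a=mid:1                              <- its mid, so candidates for it carry sdpMid "1"

Since browsers currently number mids 0, 1, 2, ... in m= line order, the two values coincide until m= lines start being recycled as described above.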
Effectively, having both fields is an artifact of very early versions of the specifications and of implementations lagging behind (Firefox, for example).

Accessing a combination of ports by adding both their offsets to a base address. How would this work?

Context: I am following an embedded systems course https://www.edx.org/course/embedded-systems-shape-the-world-microcontroller-i
In the lecture on bit-specific addressing they present the following example with a "peanut butter and jelly port".
Say you have a port PB with a base address of 0x40005000, and you want to access both bit 4 and bit 6 of PB, i.e. PB4 and PB6. One could add the offset for bit 4 (0x40) and the offset for bit 6 (0x100) to the base address (0x40005000) and define that as their new address, 0x40005140.
Here is where I am confused. If I wanted to define the address for PB6 it would be base (0x40005000) + offset (0x100) = 0x40005100, and the address for PB4 would be base (0x40005000) + offset (0x40) = 0x40005040. So how is it that to access both of them I can use base (0x40005000) + offset (0x40) + offset (0x100) = 0x40005140? Is that not an entirely different memory location from the ones used to access them individually?
Also, why is bit 0 represented as 0x004? In binary that would be 0000 0100. I suppose it would represent bit 0 if you disregard the lowest two bits, but why are we disregarding them?
Lecture notes on bit specific addressing:
Your interpretation of how memory-mapped registers are addressed is quite reasonable for any normal peripheral on an ARM-based microcontroller.
However, if you read the GPIODATA register definition on page 662 of the TM4C123GH6PM datasheet, you will see that this "register" behaves very differently.
They map a large block of the address space (1024 bytes) to a single 32-bit register. This means that bits [9:2] of the address bus are not needed to select the register, and are in fact overloaded with data: they contain the mask of the bits to be updated. This is what the "offset" calculation you have copied is trying to describe.
Personally I think this hardware interface could be a very clever way to let you set only some of the outputs within a bank using a single atomic write, but it makes this a very bad choice of device to use for teaching, because this isn't the way things normally work.
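As a rough sketch of what that address arithmetic looks like in code (assuming the TM4C123's GPIO Port B data aperture at 0x40005000, and omitting clock gating and pin direction setup), the mask of pins you want to touch is simply shifted left by two and added to the base address. That also answers the 0x004 question: the mask for bit 0 is 0x001, and 0x001 << 2 = 0x004.

#include <stdint.h>

/* Bits [9:2] of the address select which GPIO bits a read/write affects,
   so the "offset" for a pin mask is (mask << 2). The base address is the
   GPIO Port B data aperture on the TM4C123 (APB bus). */
#define GPIO_PORTB_BASE 0x40005000u
#define GPIO_PORTB_DATA(mask) (*(volatile uint32_t *)(GPIO_PORTB_BASE + ((mask) << 2)))

void example(void)
{
    /* PB4 and PB6 together: mask = (1<<4)|(1<<6) = 0x50, so the offset is
       0x50 << 2 = 0x140, i.e. the 0x40005140 from the lecture notes. */
    GPIO_PORTB_DATA((1u << 4) | (1u << 6)) = (1u << 6);  /* drive PB6 high, PB4 low; all other pins untouched */
}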

How to send/receive variable length protocol messages independently on the transmission layer

I'm writing a very specific application protocol to enable communication between two nodes. Node 1 is an embedded platform (a microcontroller), while node 2 is an ordinary computer.
The protocol defines messages of variable length: sometimes node 1 sends a 100-byte message to node 2, while at other times it sends a 452-byte message.
The protocol shall be independent of how the messages are transmitted. For instance, the same message could be sent over USB, Bluetooth, etc.
Let's assume that a protocol message is defined as:
| Length (4 bytes) | ...Payload (variable length)... |
I'm struggling with how the receiver can recognise how long the incoming message is. So far, I have thought of two approaches.
1st approach
The sender sends the length first (4 bytes, always fixed size), and the message afterwards.
For instance, the sender does something like this:
// assuming that the parameters of send() are: data, length of data
// msg_length counts the whole message, i.e. the 4-byte length field plus the payload
send(msg_length, 4)
send(msg, msg_length - 4)
While the receiver side does:
msg_length = receive(4)
msg = receive(msg_length - 4)   // the remaining bytes are the payload
This may be OK with some "physical protocols" (e.g. UART), but with more complex ones (e.g. USB), transmitting the length as a separate packet may introduce some overhead, since an additional USB packet (along with control data and ACK packets) has to be transmitted for only 4 bytes.
However, with this approach the receiver side is pretty simple.
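For what it's worth, here is a minimal C sketch of the receiver side of the first approach. transport_read() is a hypothetical blocking primitive (returning the number of bytes read, 0 on a closed connection, negative on error), and the length field is assumed to be big-endian and to count the whole message including its own 4 bytes, as in the pseudocode above. The key point is that a receive must loop, because a single read may return fewer bytes than requested:

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical transport primitive (UART, USB CDC, TCP, ...). */
extern int transport_read(uint8_t *buf, size_t len);

/* Keep reading until exactly len bytes have arrived. */
static bool recv_exact(uint8_t *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        int n = transport_read(buf + got, len - got);
        if (n <= 0)
            return false;              /* error or connection closed */
        got += (size_t)n;
    }
    return true;
}

/* Receive one length-prefixed message; returns payload length or -1. */
static int recv_message(uint8_t *payload, size_t max)
{
    uint8_t hdr[4];
    if (!recv_exact(hdr, sizeof hdr))
        return -1;

    /* Big-endian length, counting header + payload (an assumption here). */
    uint32_t msg_length = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16) |
                          ((uint32_t)hdr[2] << 8)  |  (uint32_t)hdr[3];
    if (msg_length < 4 || msg_length - 4 > max)
        return -1;                     /* malformed or too large for the buffer */

    if (!recv_exact(payload, msg_length - 4))
        return -1;
    return (int)(msg_length - 4);
}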
2nd approach
The alternative would be that the receiver keeps receiving data into a buffer and, at some point, tries to find a valid message in it. Valid means: finding the length of the message first, and then its payload.
Most likely this approach requires adding some "start of message" byte(s) at the beginning of each message, so that the receiver can use them to identify where a message starts.
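Here is a rough C sketch of that second approach; the 0xA5 start byte, the big-endian length field, and the caller-managed buffer are all assumptions made for illustration:

#include <stdint.h>
#include <stddef.h>

#define FRAME_START 0xA5u      /* arbitrary start-of-frame marker for this sketch */
#define MAX_PAYLOAD 4096u      /* arbitrary sanity limit for this sketch */

/* Look for a complete frame in buf[0..avail). On success, set *payload_out
   and return the payload length; return -1 if no complete frame is available yet. */
static int find_frame(const uint8_t *buf, size_t avail, const uint8_t **payload_out)
{
    for (size_t i = 0; i + 5 <= avail; i++) {          /* need start byte + 4-byte length */
        if (buf[i] != FRAME_START)
            continue;
        uint32_t len = ((uint32_t)buf[i + 1] << 24) | ((uint32_t)buf[i + 2] << 16) |
                       ((uint32_t)buf[i + 3] << 8)  |  (uint32_t)buf[i + 4];
        if (len > MAX_PAYLOAD)
            continue;                                  /* implausible length: treat as a false start byte */
        if (i + 5 + len > avail)
            return -1;                                 /* header found, payload not fully buffered yet */
        *payload_out = &buf[i + 5];
        return (int)len;
    }
    return -1;                                         /* no start byte seen so far */
}

In practice you would also append a checksum or CRC to each frame, since a payload byte can accidentally look like a start byte.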

I2C Master-slave address

I am working on a project where I'm trying to implement I2C master-slave communication in order to read some data from a magnetic sensor. That's all OK and I have written the code. However, I am not quite sure about the slave address needed for the communication to actually happen. The board I'm using can hold STM32 ARM Cortex-M3 and Cortex-M4 MCUs. I don't know if it matters, but the MCU I'm using is an STM32F107VCT6.
The part of the code where I need to enter the address is in the following function marked as "SLAVE_ADDRESS_GOES_HERE":
uint8_t Magnet_readReg(const uint8_t regAdd)
{
    uint8_t pom[1] = {0};
    pom[0] = regAdd;                                                 // register to read from

    I2C1_Start();
    I2C1_Write(SLAVE_ADDRESS_GOES_HERE, pom, 1, END_MODE_RESTART);   // send register address, repeated start
    I2C1_Read(SLAVE_ADDRESS_GOES_HERE, pom, 1, END_MODE_STOP);       // read one byte back, then stop
    return pom[0];
}
The results should be some numbers which tell me how strong the magnetic field is. It has three different output values because it calculates a value for each of the three axes (yes, that is the correct plural of axis), so it could be used as a compass, for example.
Now the trick is that I don't get any results because I don't know the actual address of the sensor. Therefore, I will share the datasheet of the sensor I'm using. I am not sure if I'm reading it correctly.
Here is the datasheet:
https://www.memsic.com/userfiles/files/Datasheets/Magnetic-Sensors-Datasheets/MMC3416xPJ_Rev_C_2013_10_30.pdf
Solved.
As it turns out, there was something wrong with the board itself, so a connection couldn't be established. And the address is 60H for writing and 61H for reading: 30H is the 7-bit address, and appending a zero or a one as the LSB gives 60H or 61H.
The I2C address of your sensor is described on page 4 of the datasheet you provided. You must read the marking on your device package, then use the table mapping the marking ("Number") to the part number in the datasheet to determine your exact part. Finally, use the table under the "Ordering Guide" to find the factory-programmed I2C slave address of your device.
Given that you later specified that your 7-bit I2C slave address is 0x30, you must have part number MMC34160PJ, which should be marked:
0 •
XX
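For reference, the relationship between the 7-bit address and the 60H/61H bytes described above looks like this in C. The macro names are purely illustrative, and whether your I2C driver expects the 7-bit or the shifted 8-bit form depends on the library (the I2C1_Write/I2C1_Read calls above take a slave address parameter whose expected format is defined by that library), so check its documentation:

#include <stdint.h>

#define MAG_ADDR_7BIT   0x30u                                   /* factory-programmed 7-bit slave address */
#define MAG_ADDR_WRITE  ((uint8_t)(MAG_ADDR_7BIT << 1))         /* 0x60: address byte with R/W = 0 (write) */
#define MAG_ADDR_READ   ((uint8_t)((MAG_ADDR_7BIT << 1) | 1u))  /* 0x61: address byte with R/W = 1 (read)  */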

Telnet reader will split input after 1448 characters

I am writing a Java applet that will print what a telnet client sends over the connection. Unfortunately, the input gets split after 1448 characters.
The code that is proving to be a problem:
char[] l = new char[5000];
Reader r = new BufferedReader(new InputStreamReader(s.getInputStream(), "US-ASCII"));
int i = r.read(l);   // a single read may return after far fewer than l.length characters
I cannot change the source of what the telnet client reads from, so I am hoping it is an issue with the above three lines.
You're expecting to get telnet protocol data units from the TCP layer. It just doesn't work that way. You can only extract telnet protocol data units from the code that implements the telnet protocol. The segmentation of bytes of data at the TCP layer is arbitrary and it's the responsibility of higher layers to reconstruct the protocol data units.
The behavior you are seeing is normal, and unless you're diagnosing a performance issue, you should completely ignore the way the data is split at the TCP level.
The reason you're only getting 1448 bytes at a time is that the underlying protocols divide the transmission into packets. Frequently, the packet size is around 1500 bytes, and some of those bytes are used for protocol headers, so you're left with a chunk of 1448 bytes of data. The protocols don't guarantee that if you send X bytes in a 'single shot', the client will receive X bytes in a single shot (e.g. in a single call to the receive method).
As has been noted already in the comments above, it's up to the receiving program to re-assemble these packets in a way that is meaningful to the application. In general, you perform receives and append the data you receive to some buffer until you find an agreed-upon 'end of block of data' marker (such as an end-of-line, a newline, a carriage return, some symbol that won't appear in the data, etc.).
If the server is genuinely a telnet server, its output might be line-based (e.g. a single block of data is terminated with an 'end of line': carriage return and line feed characters). RFC 854 may be helpful; it details the Telnet protocol as originally specified.

When is IPPROTO_UDP required?

Is there ever a case where UDP is not the default protocol for SOCK_DGRAM? (Real cases, please, not a hypothetical "it might be".)
In other words, in what situations would the following two lines not produce identical behavior?
if ((s=socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP))==-1)
if ((s=socket(AF_INET, SOCK_DGRAM, 0))==-1)
Some operating systems (e.g. the Linux kernel since 2.6.20) support a second protocol for SOCK_DGRAM, called UDP-Lite. If supported by your system, it is enabled by passing IPPROTO_UDPLITE as the third argument to the socket() call.
It is differentiated from normal UDP by allowing the checksum to cover only a portion of the datagram (normally, UDP checksumming is an all-or-nothing affair). That way, corruption in the parts of the datagram outside the checksummed area does not cause the whole datagram to be discarded: as long as the checksummed portion arrives intact, the datagram is still delivered to the application, which is useful for error-tolerant payloads such as audio or video.
For backwards compatibility with existing code, I suspect (but cannot guarantee) that socket(AF_INET, SOCK_DGRAM, 0) will continue to default to normal UDP, even on systems that additionally support UDP-Lite.
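For illustration, here is a minimal Linux-flavoured sketch that opens a UDP-Lite socket and restricts the checksum to the header plus the first 8 payload bytes. The UDPLITE_*_CSCOV constants come from <linux/udp.h>; fallback defines are included in case your libc headers don't provide them, and error handling for setsockopt() is omitted:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPPROTO_UDPLITE
#define IPPROTO_UDPLITE 136          /* IANA protocol number for UDP-Lite */
#endif
#ifndef UDPLITE_SEND_CSCOV
#define UDPLITE_SEND_CSCOV 10        /* from <linux/udp.h>: sender checksum coverage */
#endif
#ifndef UDPLITE_RECV_CSCOV
#define UDPLITE_RECV_CSCOV 11        /* from <linux/udp.h>: minimum coverage accepted on receive */
#endif

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDPLITE);
    if (s == -1) {
        perror("socket(IPPROTO_UDPLITE)");   /* e.g. a kernel built without UDP-Lite */
        return 1;
    }

    /* Coverage counts from the start of the 8-byte UDP-Lite header, so 16
       means "header plus the first 8 payload bytes"; 0 covers everything. */
    int cov = 16;
    setsockopt(s, IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV, &cov, sizeof cov);
    setsockopt(s, IPPROTO_UDPLITE, UDPLITE_RECV_CSCOV, &cov, sizeof cov);

    close(s);
    return 0;
}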
Given these declarations:
tcp_socket = socket(AF_INET, SOCK_STREAM, 0);
udp_socket = socket(AF_INET, SOCK_DGRAM, 0);
raw_socket = socket(AF_INET, SOCK_RAW, protocol);
the ip(7) manual page on Linux says:
The only valid values for protocol are 0 and IPPROTO_TCP for TCP sockets, and 0 and IPPROTO_UDP for UDP sockets. For SOCK_RAW you may specify a valid IANA IP protocol defined in RFC 1700 assigned numbers.
Those two lines in your question will always produce the same result.
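If you want to convince yourself on a particular system, here is a throwaway Linux-specific test using SO_PROTOCOL (available since kernel 2.6.32) that reports the protocol actually bound to each socket; both are expected to print 17, i.e. IPPROTO_UDP:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int a = socket(AF_INET, SOCK_DGRAM, 0);
    int b = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    int pa = -1, pb = -1;
    socklen_t len = sizeof pa;

    getsockopt(a, SOL_SOCKET, SO_PROTOCOL, &pa, &len);
    len = sizeof pb;
    getsockopt(b, SOL_SOCKET, SO_PROTOCOL, &pb, &len);

    /* Both sockets should report the same protocol number. */
    printf("protocol with 0: %d, with IPPROTO_UDP: %d\n", pa, pb);
    return 0;
}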