Method call using a serial port (Boost.Asio) - boost-asio

I am a beginner with serial ports and want to get a better feel for them. I would like to know whether it is possible for the device on one side of a serial connection to build and send a request, for instance add(a, b), while the device on the other side receives the request, processes it (performs the calculation), and later returns the result to the first device. Assume that the second device runs a program containing the function add(int a, int b).
Thanks and best regards,
Chunya

Boost.Asio with serial ports
You can send a command identifier (a unique ID known to both devices; the simplest example is the command name) together with its serialized arguments. The exact protocol and format are up to you.
So it could look like:
Device1 -> Device2: add(1, 2)
Device2 -> Device1: result(3)
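
A minimal synchronous sketch of the requesting side with Boost.Asio could look like this (the device path, baud rate, and the newline-terminated text format are all assumptions; the responding device would read the line, parse it, call add(), and write back "result 3\n" in the same format):

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main() {
    boost::asio::io_context io;
    boost::asio::serial_port port(io, "/dev/ttyUSB0");  // assumed device path
    port.set_option(boost::asio::serial_port_base::baud_rate(9600));

    // Send the request: command name plus arguments, newline-terminated.
    boost::asio::write(port, boost::asio::buffer(std::string("add 1 2\n")));

    // Block until the reply line arrives, then print it, e.g. "result 3".
    boost::asio::streambuf buf;
    boost::asio::read_until(port, buf, '\n');
    std::istream is(&buf);
    std::string line;
    std::getline(is, line);
    std::cout << line << '\n';
}

The same read_until/getline pattern works on the responding side for parsing incoming requests.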

Related

What is the difference between sdpMid and sdpMLineIndex in an RTCIceCandidate when doing trickle ice?

I was debugging a WebRTC trickle ICE exchange the other day and realized I never paid much attention to the candidate messages (generated by calling RTCIceCandidate.toJSON()) that look like this:
{"candidate":"candidate:394300051 1 tcp 1518214911 192.168.1.12 9 typ host tcptype active generation 0 ufrag rfBJ network-id 1",
"sdpMid":"0","sdpMLineIndex":0}
In the above JSON message, what exactly do sdpMid and sdpMLineIndex represent? They always appear to have the same values (either 0/"0" or 1/"1").
Is it correct to say:
That sdpMid corresponds to the a=mid line for a stream in the initial SDP? That is, if the line for the audio stream was declared as a=mid:audio, then the candidate's sdpMid value would have been "audio" as well.
That sdpMLineIndex is the index number of the stream as it appeared in the SDP? That is, if audio was first in the SDP, then this value is 0, and if video was second it would be 1?
In other words, sdpMid is a string name for the stream and sdpMLineIndex is an index value. But the standard convention used by most implementations is to just have these values be the same.
Is this correct?
sdpMid and sdpMLineIndex are equivalent in current browser-generated offers for simple cases. They are not equivalent in cases like stopping a transceiver (using .stop()) and then generating a new offer. This new offer usually has a new mid, which can be an incrementally generated number, whereas the sdpMLineIndex may not increment if a previously unused m= line gets recycled.
Effectively they are artifacts from very early versions of the specifications and implementations lagging behind (here for Firefox).

How to send/receive variable length protocol messages independently on the transmission layer

I'm writing a very specific application protocol to enable communication between 2 nodes. Node 1 is an embedded platform (a microcontroller), while node 2 is a common computer.
The protocol defines messages of variable length. This means that sometimes node 1 sends a message of 100 bytes to node 2, while another time it sends a message of 452 bytes.
The protocol shall be independent of how the messages are transmitted. For instance, the same message can be sent over USB, Bluetooth, etc.
Let's assume that a protocol message is defined as:
| Length (4 bytes) | ...Payload (variable length)... |
I'm struggling with how the receiver can recognize how long the incoming message is. So far, I have thought of 2 approaches.
1st approach
The sender sends the length first (4 bytes, always fixed size), and the message afterwards.
For instance, the sender does something like this:
// assuming that the parameters of send() are: data, length of data
send(msg_length, 4)
send(msg, msg_length - 4)
While the receiver side does:
msg_length = receive(4)
msg = receive(msg_length - 4)  // the length field counts itself
This may be OK with some "physical protocols" (e.g. UART), but with more complex ones (e.g. USB), transmitting the length in a separate packet may introduce some overhead: an additional USB packet (plus control data and ACK packets) has to be transmitted for only 4 bytes.
However, with this approach the receiver side is pretty simple.
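
A rough sketch of the receiver for this first approach in C++, where receive_exact() is a hypothetical transport-independent primitive (over TCP it could be implemented with boost::asio::read(); over USB or Bluetooth it would wrap the respective stack), and the byte order of the length field is assumed to be agreed between the nodes:

#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical transport-independent primitive: blocks until exactly
// n bytes have been received into dst.
void receive_exact(uint8_t* dst, std::size_t n);

std::vector<uint8_t> receive_message() {
    uint8_t header[4];
    receive_exact(header, 4);                 // fixed-size length field
    uint32_t total = 0;
    std::memcpy(&total, header, 4);           // assumes an agreed byte order
    // A real implementation should validate `total` before allocating.
    std::vector<uint8_t> payload(total - 4);  // the length field counts itself
    receive_exact(payload.data(), payload.size());
    return payload;
}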
2nd approach
The alternative would be that the receiver keeps receiving data into a buffer and at some point tries to find a valid message. Valid means: first finding the length of the message, and then its payload.
Most likely this approach requires adding some "start message" byte(s) at the beginning of the message, such that the receiver can use them to identify where a message is starting.
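
A sketch of this second approach, under the assumption of a single start-of-message byte; the 0xAA marker is illustrative and not part of the format above, and here the length field is taken to count only the payload:

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <optional>
#include <vector>

constexpr uint8_t START_BYTE = 0xAA;  // assumed sync marker

// Try to extract one complete message from the receive buffer.
// Returns the payload and removes the consumed bytes, or std::nullopt
// if no complete message has arrived yet.
std::optional<std::vector<uint8_t>> try_extract(std::vector<uint8_t>& buf) {
    // Resynchronize: discard everything before the start byte.
    buf.erase(buf.begin(), std::find(buf.begin(), buf.end(), START_BYTE));
    if (buf.size() < 5)                    // start byte + 4-byte length field
        return std::nullopt;
    uint32_t len = 0;
    std::memcpy(&len, buf.data() + 1, 4);  // payload length, agreed byte order
    if (buf.size() < 5 + len)              // payload not complete yet
        return std::nullopt;
    std::vector<uint8_t> payload(buf.begin() + 5, buf.begin() + 5 + len);
    buf.erase(buf.begin(), buf.begin() + 5 + len);
    return payload;
}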

Change a Wireshark preference in a dissector?

I'm creating a dissector for Wireshark in C, for a protocol on top of UDP. Since I'm using heuristic dissection, but another protocol has a standard dissector registered on the same port as mine, my packets are being dissected as that other protocol. For my dissector to work, I need to enable the "try heuristic dissectors first" UDP preference, but I want to set that preference when my plugin is registered (in code), so the user does not need to change it manually.
I noticed that in epan/prefs.h the function prefs_set_pref exists! But when I used it in my plugin, Wireshark crashed on startup with Bus Error 10.
Is what I want to do possible/correct?
So I've tried this:
G_MODULE_EXPORT void plugin_register(void) {
    prefs_set_pref("udp.try_heuristic_first:true");
    // My proto_register goes here
}
Since epan/prefs.h has:
/*
* Given a string of the form "<pref name>:<pref value>", as might appear
* as an argument to a "-o" option, parse it and set the preference in
* question. Return an indication of whether it succeeded or failed
* in some fashion.
*
* XXX - should supply, for syntax errors, a detailed explanation of
* the syntax error.
*/
WS_DLL_PUBLIC prefs_set_pref_e prefs_set_pref(char *prefarg);
Thanks
Calling prefs_set_pref("udp.try_heuristic_first:true"); works for me in a test Wireshark plugin.
OK: Assuming no other issues, I expect the problem is that prefs_set_pref() modifies the string passed to it.
If (the address of) a string literal is passed, the code will attempt to modify the literal, which, in general, is not allowed. I suspect this is the cause of your Bus Error 10.
(I'd have to dig deeper to see why my test on Windows actually worked).
So: I suggest trying something like:
char foo[] = "udp.try_heuristic_first:true";  /* writable copy of the literal */
...
prefs_set_pref(foo);
to see if that works.
Or: do a strcpy of the literal into a local array.
==============
(Earlier original comments)
Some comments/questions:
What's the G_MODULE_EXPORT about?
None of the existing Wireshark plugin dissectors use this.
(See any of the dissectors under plugins in your Wireshark source tree.)
The plugin register function needs to be named proto_register_???, where ???
is the name of your plugin dissector.
So: I don't understand the whole G_MODULE_EXPORT void plugin_register(void){ etc.
The call to prefs_set_pref() should be in the proto_reg_handoff_???() function (and not in the proto_register_???() function).

LabVIEW TCP connection

There are some examples of TCP/IP connections in LabVIEW, but I don't really get what the VI is doing, or what some of the functions are doing. Here are the pictures of the examples.
Image 1: The Server
Why is the wire split into two wires after the Type Cast function? And I don't really get what the other marked functions do.
Image 2: The Client
First, if you don't understand what the functions do, learn to open the context help window (Ctrl+H) and right-click each function to get the specific help for it. This will tell you that the functions read from and write to the TCP stream. There should also be some more TCP examples in the Example Finder, which should have more comments.
As for what's happening, LabVIEW represents the TCP byte stream as a string, so whoever wrote the example used the following convention: use Type Cast to convert the data to a string, then get the length of that string (an I32, so it's 4 bytes), type cast that to a string as well, and send it before the data.
On the receiving side, the code starts by reading 4 bytes (because the length is an I32) and type casting them back to an I32. This is the length of the rest of the data, and it's fed into the second read, which then returns the data, which is type cast back to the original type. This is done because TCP has no terminating character, so this is a convenient way of knowing how much data to read. You don't have to do it like this, but it's an option.
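
In textual form, the sender side of that convention would look roughly like the sketch below (send_all() is a hypothetical primitive that writes all n bytes to the TCP stream; note that LabVIEW's Type Cast produces big-endian data, so a non-LabVIEW peer has to match that byte order):

#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical primitive that writes all n bytes to the TCP stream.
void send_all(const char* data, std::size_t n);

void send_message(const std::string& msg) {
    uint32_t len = static_cast<uint32_t>(msg.size());
    char header[4];
    std::memcpy(header, &len, 4);      // match the peer's byte order here;
                                       // LabVIEW's Type Cast is big-endian
    send_all(header, 4);               // 4-byte length prefix...
    send_all(msg.data(), msg.size());  // ...followed by the data itself
}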

Bluetooth incoming data string distortion

I have scales equipped with an RS232 serial port and a Bluetooth transmitter. I made a program in VBA to receive data from the scales. However, let's say out of 10 incoming strings I get 3 distorted. My regular strings look like: "+001500./3 G S". This means 1500.3 grams above zero and the output is stable. But sometimes I get the string split up, like "+" or "001500./3" or "G S". When I plug in the serial cable I have no distortions.
Serial ports are just byte streams. You can never make assumptions about how many bytes will show up in each read operation. It's only a coincidence that when you use a real cable you read the whole string at once. You have to do the string splitting yourself, and continue reading when you only get a partial result.
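
A sketch of that reassembly logic (shown in C++ for brevity; the same pattern applies in VBA by accumulating incoming serial data in a module-level buffer; the CR/LF terminator is an assumption, so check the scale's manual for the actual line ending):

#include <string>
#include <vector>

// Append whatever bytes arrived in this read and return any complete
// readings; partial data stays in `pending` until the rest shows up.
std::vector<std::string> extract_readings(std::string& pending,
                                          const std::string& chunk) {
    pending += chunk;
    std::vector<std::string> complete;
    std::size_t pos;
    while ((pos = pending.find('\n')) != std::string::npos) {
        std::string line = pending.substr(0, pos);
        if (!line.empty() && line.back() == '\r')
            line.pop_back();            // strip the CR of a CR/LF pair
        complete.push_back(line);       // e.g. "+001500./3 G S"
        pending.erase(0, pos + 1);      // keep any trailing partial data
    }
    return complete;
}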