How do receivers of broadcast messages keep track of control + payload (error handling)?

When transmitting a message, you have payload + control data.
While control data is there to help a receiver filter out the right data, the payload is just a state at a point in time.
Given the countless amounts of data being broadcast from all kinds of devices, how does a receiver know at any time that this data is meant for it and not for another device?
Example:
time1: control
time2: payload
or even
time1: control-part1
time2: control-part2
...
timen: payloadn
How does a receiver of control-part2 know about control-part1, and how is it able to assign payloadn to control-part1?
In addition to this, communication might be encrypted, and even if it isn't, it probably looks like this:
time1: control-part1
time1.5: payload x by some random other device
time2: control-part2
...
timen: payloadn
timen+1: control by some random other device
And if it's encrypted, how does the receiver know where the next bit of information is?
An application I'm thinking of is light modulation in distance-measurement devices. The device basically has to wait for a full sequence to come back, but there is a lot of light diffraction going on, interfering with the receptor. Why does it just work?

If the broadcast message is not encrypted, the receiver just sees all the broadcast messages. In the case of ARP, when a device wants to know the MAC address of another device, it sends an ARP request (broadcast), and the target device responds because there is a field in the ARP packet that identifies it.
In the case of broadcast encryption, on the other hand, there is a system to distribute keys, and only the receiver devices that hold the right key can decrypt the message and access the information (Broadcast Encryption).
So when a broadcast is not encrypted, the packet carries some information to identify the target host; when the message is encrypted, keys are distributed first, and only the devices that have the keys can see the decrypted information.
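To make the filtering concrete, here is a minimal sketch of that identifying field (layout per RFC 826 for the Ethernet/IPv4 case; the helper name is mine, and real code would also have to handle network byte order and struct packing):

#include <stdint.h>
#include <string.h>

/* ARP packet layout per RFC 826 (Ethernet/IPv4 case). */
struct arp_packet {
    uint16_t htype;          /* hardware type, 1 = Ethernet */
    uint16_t ptype;          /* protocol type, 0x0800 = IPv4 */
    uint8_t  hlen;           /* MAC address length, 6 */
    uint8_t  plen;           /* IPv4 address length, 4 */
    uint16_t oper;           /* 1 = request, 2 = reply */
    uint8_t  sender_mac[6];
    uint8_t  sender_ip[4];
    uint8_t  target_mac[6];  /* unknown in a request */
    uint8_t  target_ip[4];   /* the "who is being asked" field */
};

/* Every host on the segment sees the broadcast; each one simply
 * compares the target address field against its own address and
 * ignores the packet if it doesn't match. */
int arp_request_is_for_me(const struct arp_packet *p, const uint8_t my_ip[4])
{
    return p->oper == 1 && memcmp(p->target_ip, my_ip, 4) == 0;
}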
I hope this helps!

Cumulocity - managedObject Event - detect device first connection

Looking to understand whether there is a bulletproof event from the managedObject side of c8y where we know the device has just connected.
I have a microservice that listens for events in real time and I want to trigger a process once we know a device has connected to send its payload.
We have used:
"c8y_Connection": {"status":"CONNECTED"}
We had the microservice log all events from managedObjects to Slack, and for three days we saw the "status":"CONNECTED" value in the payload of our demo devices at reporting times.
But after three days, we no longer see this "CONNECTED" state (all payloads show "DISCONNECTED").
What I am trying to achieve from the inventoryObject event is to understand when a device has connected and sent its payload, so I know when data has arrived. I then go get the data and process it externally. This is post-registration and part of the daily data-send cycle for my type of device.
What would be the best way to understand when a device has sent payload in a microservice? I want to notify an external application with either “data is arriving for id 35213” or even better, “data has arrived for device 35213, and here’s the $payload”.
Just some general information up front:
The c8y_Connection fragment showing CONNECTED indicates an active MQTT connection or an active long-polling connection, and it is only evaluated once every minute.
So if the client is just sending data and immediately disconnecting afterwards, this might not be picked up.
If you want to see that the device has sent something to Cumulocity, the c8y_Availability fragment may be a better fit, as it holds the timestamp of when the device last sent something:
{ "lastMessage": "2022-10-11T14:49:50.201+09:00", "status": "UNAVAILABLE"}
Here too, the evaluation (or rather, the update to the database) only happens every minute.
Both c8y_Availability and c8y_Connection, however, are only generated if availability monitoring has been activated for the device (by defining a required interval for it).
So if you have activated availability monitoring and you see a "lastMessage", you can reliably say that the device has already sent something to Cumulocity.
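As a rough sketch of that polling approach (it assumes the standard GET /inventory/managedObjects/{id} REST endpoint; the tenant URL, credentials, and device id are placeholders, and a real microservice would use the SDK and a proper JSON parser rather than a string search):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <curl/curl.h>

/* Accumulate the HTTP response body into a growable buffer. */
struct buf { char *data; size_t len; };

static size_t on_body(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    struct buf *b = userdata;
    size_t n = size * nmemb;
    b->data = realloc(b->data, b->len + n + 1);
    memcpy(b->data + b->len, ptr, n);
    b->len += n;
    b->data[b->len] = '\0';
    return n;
}

/* Fetch the managed object and pull out c8y_Availability.lastMessage
 * with a naive string search. Returns a heap-allocated timestamp or NULL. */
static char *fetch_last_message(const char *device_id)
{
    char url[256];
    /* "tenant.example.com" and the credentials below are placeholders. */
    snprintf(url, sizeof url,
             "https://tenant.example.com/inventory/managedObjects/%s", device_id);

    struct buf b = { NULL, 0 };
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "tenant/user:password");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &b);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    if (rc != CURLE_OK || !b.data) { free(b.data); return NULL; }

    char *ts = NULL;
    char *key = strstr(b.data, "\"lastMessage\"");
    if (key) {
        char *start = strchr(key + 13, '"');
        char *end = start ? strchr(start + 1, '"') : NULL;
        if (start && end) ts = strndup(start + 1, (size_t)(end - start - 1));
    }
    free(b.data);
    return ts;
}

/* Poll once a minute (the fragment is only updated about that often)
 * and notify when lastMessage advances, i.e. the device sent new data. */
int main(void)
{
    char last_seen[64] = "";
    for (;;) {
        char *ts = fetch_last_message("35213");
        if (ts && strcmp(ts, last_seen) != 0) {
            printf("data has arrived for device 35213 at %s\n", ts);
            snprintf(last_seen, sizeof last_seen, "%s", ts);
        }
        free(ts);
        sleep(60);
    }
}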

USB - doubts about protocol

I'm currently studying how USB works. I read that there are transactions, which are built from smaller pieces: packets. I read about all kinds of packets.
I can't understand one thing. As the book says, every transaction consists of 3 packets: token, data and handshake.
The way I understand my book is depicted in the schema below.
In my opinion:
I think the first transaction should contain only an IN token and a data packet, but no handshake packet (a handshake for what?).
I think the response should only contain an ACK handshake packet (confirming that the data was written properly to the device).
Please, help me understand it in a proper way.
Best regards,
Tom.
A transaction is a series of one or more packets.
A typical IN transaction with no data looks like this:
The host sends an IN token.
The device sends a NAK handshake packet, which means it doesn't have any data to send.
A typical IN transaction with data looks like this:
The host sends an IN token.
The device sends a DATA0 or DATA1 packet with data.
The host sends an ACK handshake.
A typical OUT transaction looks like this:
The host sends an OUT token.
The host sends a DATA0 or DATA1 packet with data.
The device sends a NAK or ACK handshake depending on whether it accepted the data.
Note that I am just talking about full-speed (12 Mbps) USB 2.0 devices, and things can get a bit more complicated for the higher-speed devices.
Note that any of these packets could be lost due to noise issues. The USB specification specifically accounts for this, ensuring that packet loss doesn't result in incorrect operation of the device or host.
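If it helps to see the three shapes side by side, here is a toy C model of that choreography (purely illustrative; the function names are mine, and this is not host-controller code):

#include <stdio.h>

/* IN transaction: the host asks; the device either NAKs (no data) or
 * answers with a DATA0/DATA1 packet, which the host then ACKs. */
void in_transaction(int device_has_data, int toggle)
{
    printf("host  : IN token\n");
    if (!device_has_data) {
        printf("device: NAK handshake (nothing to send)\n");
        return;
    }
    printf("device: DATA%d packet with payload\n", toggle);
    printf("host  : ACK handshake\n");
}

/* OUT transaction: the host sends a token and data; the device answers
 * ACK or NAK depending on whether it accepted the data. */
void out_transaction(int device_accepts, int toggle)
{
    printf("host  : OUT token\n");
    printf("host  : DATA%d packet with payload\n", toggle);
    printf("device: %s handshake\n", device_accepts ? "ACK" : "NAK");
}

int main(void)
{
    in_transaction(0, 0);   /* IN with no data: token, NAK */
    in_transaction(1, 0);   /* IN with data: token, DATA0, ACK */
    out_transaction(1, 1);  /* OUT: token, DATA1, ACK */
    return 0;
}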

XBee Arduino API Remote At Command Response

I'm in trouble with programming my Arduino. I have two XBee Series 2 modules and an Arduino UNO. I use the XBee-API library from http://code.google.com/p/xbee-api/.
I generate three Remote AT Request packets (0x17) to control a digital pin of the remote sleepy node and send them over a SoftwareSerial to the XBee Coordinator, which is attached to the Arduino UNO via a SparkFun XBee Arduino Shield (https://www.sparkfun.com/products/10854). The communication works fine: every Request packet is sent out to the remote, and for every Request packet a Response packet is received. I checked this with a serial monitor and an RS232<->TTL converter. But in my Arduino software, it seems that only one Response packet is received. The curious part is that when I send the Request packets while the remote is sleeping, I then read three Responses once it is awake and takes the Requests from the Coordinator.
Has anyone tried the same thing or had the same problems? I've tried so much up to now: another baud rate, delays before sending out. Nothing works.
My recollection of ZigBee and/or 802.15.4 is that the parent node of a sleepy end device will only hold/queue a single frame for when the sleepy device wakes up. And note that in ZigBee it's only guaranteed to queue it for 7.5 seconds. You may need to modify your code to send a single Remote AT Request at a time and wait for the response before sending another (see the sketch at the end of this answer).
This page has a good description about how the MAC layer works:
Once the frame is assembled, there are actually two ways to send it. If it's going to another router or an end device whose receiver is always on, the frame will be sent directly via the radio. Otherwise, if the destination is a sleepy end device, the frame will need to be sent as an indirect transfer. The frame will go to the indirect queue until the destination device wakes up and polls the parent. Once the poll comes in, the frame will get sent to the destination.
It would be great if the XBee module supported a frame type that contains multiple AT commands, but as far as I can tell, that isn't an option.
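Here is a sketch of that one-request-at-a-time pattern (the xbee_send_remote_at / xbee_wait_response helpers are hypothetical stand-ins for your library's calls, stubbed so the sketch compiles; the "D1" command and 0x05 value are just examples):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical wrappers: send a 0x17 Remote AT Command Request and read
 * back the matching 0x97 Remote AT Command Response. Replace the bodies
 * with real serial I/O against your XBee library. */
static bool xbee_send_remote_at(unsigned char frame_id, const char *cmd,
                                unsigned char value)
{
    printf("sending frame %u: AT %s = 0x%02x\n", frame_id, cmd, value);
    return true;
}

static bool xbee_wait_response(unsigned char frame_id, unsigned timeout_ms)
{
    printf("waiting up to %u ms for response to frame %u\n",
           timeout_ms, frame_id);
    return true;
}

/* Send the three pin commands strictly one at a time: the parent of a
 * sleepy end device may queue only a single indirect frame, so firing
 * all three back to back loses two while the remote sleeps. */
static bool set_remote_pin(void)
{
    for (unsigned char frame_id = 1; frame_id <= 3; frame_id++) {
        if (!xbee_send_remote_at(frame_id, "D1", 0x05))
            return false;
        /* Allow for the sleepy node's wake/poll cycle; ZigBee only
         * guarantees the indirect queue for about 7.5 seconds. */
        if (!xbee_wait_response(frame_id, 8000))
            return false;
    }
    return true;
}

int main(void)
{
    return set_remote_pin() ? 0 : 1;
}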

When do USB Hosts require a zero-length IN packet at the end of a Control Read Transfer?

I am writing code for a USB device. Suppose the USB host starts a control read transfer to read some data from the device, and the amount of data requested (wLength in the Setup Packet) is a multiple of the Endpoint 0 max packet size. Then after the host has received all the data (in the form of several IN transactions with maximum-sized data packets), will it initiate another IN transaction to see if there is more data even though there can't be more?
Here's an example sequence of events that I am wondering about:
USB enumeration process: max packet size on endpoint 0 is reported to be 64.
SETUP-DATA-ACK transaction starts a control read transfer, wLength = 128.
IN-DATA-ACK transaction delivers first 64 bytes of data to host.
IN-DATA-ACK transaction delivers last 64 bytes of data to host.
IN-DATA-ACK with zero-length DATA packet? Does this transaction ever happen?
OUT-DATA-ACK transaction completes Status Phase of the transfer; transfer is over.
I tested this on my computer (Windows Vista, if it matters) and the answer was no: the host was smart enough to know that no more data can be received from the device, even though all the packets sent by the device were full (maximum size allowed on Endpoint 0). I'm wondering if there are any hosts that are not smart enough, and will try to perform another IN transaction and expect to receive a zero-length data packet.
I think I read the relevant parts of the USB 2.0 and USB 3.0 specifications from usb.org but I did not find this issue addressed. I would appreciate it if someone can point me to the right section in either of those documents.
I know that a zero-length packet can be necessary if the device chooses to send less data than the host requested in wLength.
I know that I could make my code flexible enough to handle either case, but I'm hoping I don't have to.
Thanks to anyone who can answer this question!
Read the USB specification carefully:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
So, in your case, when wLength == transfer size, answer is NO, you don't need ZLP.
In case wLength > transfer size, and (transfer size % ep0 size) == 0 answer is YES, you need ZLP.
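In code form (a tiny sketch; the function name is mine), the rule is:

#include <stdbool.h>
#include <stdio.h>

/* Encodes the rule quoted above: a zero-length packet is only needed
 * when the device has less data than wLength AND the data it does have
 * ends exactly on a max-packet-size boundary. */
bool control_in_needs_zlp(unsigned wLength, unsigned xfer_len, unsigned ep0_size)
{
    if (xfer_len >= wLength)
        return false;                  /* host got exactly what it asked for */
    return (xfer_len % ep0_size) == 0; /* short transfer, but no short packet yet */
}

int main(void)
{
    /* The questioner's case: wLength 128, 128 bytes sent, ep0 size 64. */
    printf("%d\n", control_in_needs_zlp(128, 128, 64));  /* 0: no ZLP */
    /* Device has only 64 bytes but the host asked for 128. */
    printf("%d\n", control_in_needs_zlp(128, 64, 64));   /* 1: ZLP needed */
    return 0;
}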
In general, USB uses a less-than-max-length packet to demarcate an end-of-transfer. So in the case of a transfer which is an integer multiple of max-packet-length, a ZLP is used for demarcation.
You see this in bulk pipes a lot. For example, if you have a 4096 byte transfer, that will be broken down into an integer number of max-length packets plus one zero-length-packet. If the SW driver has a big enough receive buffer set up, higher-level SW receives the entire transfer at once, when the ZLP occurs.
Control transfers are a special case because they have the wLength field, so ZLP isn't strictly necessary.
But I'd strongly suggest SW be flexible to both, as you may see variations with different USB host silicon or low-level HCD drivers.
I would like to expand on MBR's answer. The USB specification 2.0, in section 5.5.3, says:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
When a Data stage is complete, the Host Controller advances to the Status stage instead of continuing on with another data transaction. If the Host Controller does not advance to the Status stage when the Data stage is complete, the endpoint halts the pipe as was outlined in Section 5.3.2. If a larger-than-expected data payload is received from the endpoint, the IRP for the control transfer will be aborted/retired.
The key sentence in that quote is the one about halting the pipe, because it seems to specifically say what the device should do: it should "halt" the pipe if the host tries to continue the data phase after it is done, and it is done once all the requested data has been transmitted (i.e. the number of bytes transferred is greater than or equal to wLength). I think halting refers to sending a STALL packet.
In other words, the device does not need a zero-length packet in this situation and in fact the USB specification says it should not provide one.
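Putting the two answers together, a device-side endpoint 0 handler might look roughly like this (a sketch under the interpretation above; send_packet and stall_ep0 are hypothetical stand-ins for your controller's hooks, stubbed so the sketch compiles):

#include <stddef.h>
#include <stdbool.h>
#include <stdio.h>

static void send_packet(const unsigned char *data, size_t len)
{
    printf("IN response: %zu-byte data packet\n", len);
    (void)data;
}

static void stall_ep0(void)
{
    printf("IN response: STALL (data stage already complete)\n");
}

struct ep0_xfer {
    const unsigned char *data;
    size_t total;     /* bytes the device actually has */
    size_t sent;
    size_t wLength;   /* bytes the host asked for */
    size_t ep0_size;
    bool   zlp_sent;
};

/* Called for each IN token during the data stage of a control read. */
static void ep0_on_in_token(struct ep0_xfer *x)
{
    if (x->sent < x->total) {
        size_t n = x->total - x->sent;
        if (n > x->ep0_size)
            n = x->ep0_size;
        send_packet(x->data + x->sent, n);   /* next full or short packet */
        x->sent += n;
    } else if (x->total < x->wLength &&
               x->total % x->ep0_size == 0 && !x->zlp_sent) {
        send_packet(NULL, 0);  /* short transfer on a packet boundary: ZLP */
        x->zlp_sent = true;
    } else {
        stall_ep0();           /* host kept asking after completion: halt */
    }
}

int main(void)
{
    unsigned char buf[128] = {0};
    struct ep0_xfer x = { buf, 128, 0, 128, 64, false };
    ep0_on_in_token(&x);  /* 64-byte packet */
    ep0_on_in_token(&x);  /* 64-byte packet, data stage complete */
    ep0_on_in_token(&x);  /* a further IN is answered with STALL */
    return 0;
}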
You don't have to. (*)
The whole point of wLength is to tell the host the maximum number of bytes it should attempt to read (but it might read less !)
(*) I have seen devices that crash when IN/OUT requests were made at an incorrect time during control transfers (while debugging our host solution). So any host doing what you are worried about would have killed those devices and is hopefully not on the market.

iPhone: Sending large data with Game Kit

I am trying to write an app that exchanges data with other iPhones running the app through the Game Kit framework. The iPhones discover each other and connect fine, but the problem happens when I send the data. I know the iPhones are connected properly because when I serialize an NSString and send it through the connection, it comes out on the other end fine. But when I try to archive a larger object (using NSKeyedArchiver), I get the error message "AGPSessionBroadcast failed (801c0001)".
I am assuming this is because the data I am sending is too large (my files are about 500k in size, Apple seems to recommend a max of 95k). I have tried splitting up the data into several transfers, but I can never get it to unarchive properly at the other end. I'm wondering if anyone else has come up against this problem, and how you solved it.
I had the same problem w/ files around 300K. The trouble is the sender needs to know when the receiver has emptied the pipe before sending the next chunk.
I ended up with a simple state engine that ran on both sides. The sender transmits a header with how many total bytes will be sent and the packet size, then waits for acknowledgement from the other side. Once it gets the handshake it proceeds to send fixed size packets each stamped with a sequence number.
The receiver gets each one, reads it, and appends it to a buffer, then writes back to the pipe that it got the packet with that sequence #. The sender reads the packet #, slices out another buffer's worth, and so on and so forth. Each side keeps track of the state it is in (idle, sending header, receiving header, sending data, receiving data, error, done, etc.). The two sides have to keep track of when to read/write the last fragment, since it's likely to be smaller than the full buffer size.
This works fine (albeit a bit slow) and it can scale to any size. I started with 5K packet sizes but it ran pretty slow. Pushed it to 10K but it started causing problems so I backed off and held it at 8096. It works fine for both binary and text data.
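A sketch of that framing (the struct and function names are mine, the 8096-byte chunk size is the one from the answer above, and the actual Game Kit send/receive calls are omitted):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CHUNK_SIZE 8096  /* the packet size the answer settled on */

/* Sent once, before any data; the receiver ACKs it before data flows. */
struct xfer_header {
    uint32_t total_bytes;  /* size of the whole archive */
    uint32_t packet_size;  /* CHUNK_SIZE */
};

/* Each data packet is stamped with a sequence number; the receiver
 * ACKs that number, and only then does the sender slice the next chunk. */
struct xfer_packet {
    uint32_t seq;
    uint32_t len;                 /* <= CHUNK_SIZE; the last one is short */
    uint8_t  payload[CHUNK_SIZE];
};

/* Sender side: fill packet `seq`; returns payload length, 0 when done. */
uint32_t next_packet(const uint8_t *data, uint32_t total, uint32_t seq,
                     struct xfer_packet *out)
{
    uint32_t off = seq * CHUNK_SIZE;
    if (off >= total)
        return 0;
    uint32_t n = total - off;
    if (n > CHUNK_SIZE)
        n = CHUNK_SIZE;
    out->seq = seq;
    out->len = n;
    memcpy(out->payload, data + off, n);
    return n;
}

int main(void)
{
    uint8_t archive[20000] = {0};  /* stand-in for the ~500k archive */
    struct xfer_packet pkt;
    for (uint32_t seq = 0; next_packet(archive, sizeof archive, seq, &pkt); seq++)
        printf("packet %u: %u bytes\n", pkt.seq, pkt.len); /* send, await ACK */
    return 0;
}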
Bear in mind that Game Kit isn't a general file-transfer API; it's more meant for updates of where the player is, what the current location of other objects is, etc. So sending 300k for a game doesn't seem that sensible, though I can understand hijacking the API for general sharing mechanisms.
The problem is that it isn't a TCP connection; it's more like a UDP (datagram) connection. In these cases, the data isn't a stream (which gets packetized by TCP) but rather a giant chunk of data. (Technically, UDP can be fragmented into multiple IP packets - but lose one of those and the entire UDP datagram is lost, as opposed to TCP, which will retry.)
The MTU for most wired networks is ~1.5k; for bluetooth, it's around ~0.5k. So any UDP packet that you sent (a) may get lost, (b) may be split into multiple MTU-sized IP packets, and (c) if one of those packets is lost, then you will automatically lose the entire set.
Your best strategy is to emulate TCP - send out packets with a sequence number, so the receiving end can request retransmission of packets that went missing. If you're using the equivalent of an NSKeyedArchiver, then one suggestion is to iterate through the keys and write those out as individual keys (assuming each keyed value isn't that big on its own). You'll need to have some kind of ACK sent back for each packet, and a total ACK when you're done, so the sender knows it's OK to drop the data from memory.