Bluetooth Low Energy: discovery modes and connection modes, independent or dependent? - testing

In the GAP test spec (4.1.0) there is a test case, TP/DISC/NONM/BV-02-C [Non-discoverable Mode Undirected Connectable Mode].
Basically I need to put the IUT in non-discoverable mode and undirected connectable mode.
Let's see what the Core 4.1 spec has to say:
Non-discoverable mode:
1) Shall not set the LE General Discoverable or LE Limited Discoverable flags in the advertising data.
2) A Peripheral device in the non-discoverable mode may send non-connectable
undirected advertising events or scannable undirected advertising events,
or may not send advertising packets.
If the Peripheral device in the non-discoverable mode sends non-connectable
advertising events or scannable undirected advertising events then it is
recommended that the Host configures the Controller as follows:
• The Host should set the advertising filter policy to either ‘process scan and
connection requests only from devices in the White List’ or ‘process scan
and connection requests from all devices’.
Undirected connectable mode:
The Host shall configure the Controller to send undirected connectable advertising
events.
The required advertising type is contradictory. So what should I do for this particular test case?

I just read a book on BLE. It seems the discovery mode has nothing to do with the type of advertisement: the discovery mode depends only on the flags in the advertising data, while the connection mode depends on the type of advertisement.
I am not marking this as the correct answer; I would like feedback from someone experienced in BLE dev/test.
Update:
Discoverable modes just define the flags in the advertising packet; they do not dictate any type of advertisement. Any advertisement type that can carry an advertising data payload can be used in any discoverable mode.
When you advertise, it has to be in one of the connection modes. The connection mode defines the type of advertisement, and the discovery mode defines the flags in the advertising data.
For example: if the Peripheral uses (no flags + undirected connectable mode) and the Central runs the general or limited discovery procedure, the peripheral will not be seen by the application on top of the GAP central, even though it is connectable.
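Putting the two together for TP/DISC/NONM/BV-02-C: select the connection mode with the advertising type (ADV_IND) and the discoverable mode with the Flags AD field. A minimal sketch at the HCI level (hci_send_cmd is a hypothetical transport hook; the opcodes and parameter values come from the Core spec):

```c
#include <stdint.h>

/* Hypothetical HCI transport hook: sends one HCI command and returns 0
   on command-complete success. */
extern int hci_send_cmd(uint16_t opcode, const void *params, uint8_t len);

#define OGF_LE              0x08
#define OCF_SET_ADV_PARAMS  0x0006
#define OCF_SET_ADV_DATA    0x0008
#define OPCODE(ogf, ocf)    (uint16_t)(((ogf) << 10) | (ocf))

int enter_nondiscoverable_undirected_connectable(void)
{
    /* Connection mode: ADV_IND (0x00) = connectable undirected advertising,
       i.e. the undirected connectable mode. */
    const uint8_t adv_params[15] = {
        0x00, 0x08,        /* Advertising_Interval_Min = 0x0800 (1.28 s)  */
        0x00, 0x08,        /* Advertising_Interval_Max = 0x0800           */
        0x00,              /* Advertising_Type = ADV_IND                  */
        0x00,              /* Own_Address_Type = public                   */
        0x00,              /* Peer_Address_Type (unused for undirected)   */
        0, 0, 0, 0, 0, 0,  /* Peer_Address (unused)                       */
        0x07,              /* Advertising_Channel_Map = ch 37, 38, 39     */
        0x00               /* Filter_Policy = process scan/connect
                              requests from all devices                   */
    };
    if (hci_send_cmd(OPCODE(OGF_LE, OCF_SET_ADV_PARAMS),
                     adv_params, sizeof adv_params) != 0)
        return -1;

    /* Discoverable mode: Flags AD structure with neither LE Limited
       Discoverable (bit 0) nor LE General Discoverable (bit 1) set
       = non-discoverable mode. */
    uint8_t adv_data[32] = {0};
    adv_data[0] = 3;       /* Advertising_Data_Length (significant part)  */
    adv_data[1] = 0x02;    /* AD structure length                         */
    adv_data[2] = 0x01;    /* AD type: Flags                              */
    adv_data[3] = 0x04;    /* only BR/EDR Not Supported; no disc. bits    */
    return hci_send_cmd(OPCODE(OGF_LE, OCF_SET_ADV_DATA),
                        adv_data, sizeof adv_data);
}
```

With this configuration the device answers connection requests (the test case's connectable half) while a scanning central running a discovery procedure filters it out for lack of the discoverable flags.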

IoT Edge best practices

We have around 9000 devices in the field.
These devices are deployed in groups of 1-100 at customers' on-premises sites.
The devices are not capable of azure-iot-sdk integration.
The devices have a webservice API.
The devices should appear as first-class devices in azure.
We like the iot edge module provisiong feature.
We want to evaluate if modules could gather data from the devices and send them to IoTHub for further processing.
We found this feature overview of IoT Edge: https://learn.microsoft.com/de-de/azure/iot-edge/iot-edge-as-gateway
The transparent and protocol translation patterns are out of scope due to the above facts. The identity translation pattern seems to fit.
We want a 1 to 1 relationship between module and real device.
Therefore we assume the following POC, hoping for clarification and best practices:
we implement an IoT Edge module (azure-iot-sdk-java)
we open the module connection to IoT Edge and subscribe to desired properties
the module identity receives, as desired properties, the IP of the real device and the Azure device identity connection string
we open the device connection to IoT Edge by adding GatewayHostName to the device connection string, as described here: https://learn.microsoft.com/de-de/azure/iot-edge/iot-edge-as-gateway
we request data from the real device and send it via the Azure device identity
This somehow mixes up two patterns and seems kind of odd to us.
Can you point out best practices and risks of this approach?
Yes, I agree that the identity translation pattern could fit your scenario.
There are three patterns for using an IoT Edge device as a gateway: transparent, protocol translation, and identity translation; you can refer to the link above for a fuller introduction to these three patterns.
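As an illustration of step 4 of your POC, the identity translation wiring comes down to the GatewayHostName segment of the downstream device's connection string, which makes the device identity connect through the edge hub rather than directly to the IoT Hub. A minimal sketch (plain C for illustration; all values are placeholders, and azure-iot-sdk-java accepts the same string in its DeviceClient constructor):

```c
#include <stdio.h>

/* Builds the connection string for a downstream ("leaf") device identity
   that should connect through the IoT Edge gateway instead of directly
   to the IoT Hub. All values below are placeholders. */
int main(void)
{
    const char *hub       = "myhub.azure-devices.net";
    const char *device_id = "real-device-0001";
    const char *key       = "base64-shared-access-key";
    const char *edge_host = "my-edge-box.local";  /* the gateway hostname */

    char conn[512];
    snprintf(conn, sizeof conn,
             "HostName=%s;DeviceId=%s;SharedAccessKey=%s;GatewayHostName=%s",
             hub, device_id, key, edge_host);

    /* This string is what the module would hand to the device client
       (e.g. IoTHubDeviceClient_CreateFromConnectionString in the C SDK,
       or the DeviceClient constructor in azure-iot-sdk-java). */
    puts(conn);
    return 0;
}
```

The main risk with a 1:1 module-per-device mapping is scale: the edge hub has to maintain one upstream device connection per real device, so test the 100-devices-per-site case early.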

Do USB Control Transfers guarantee delivery?

USB 2.0 specifies 4 types of transfers (in section 5.4 Transfer Types):
Control Transfers
Isochronous Transfers
Interrupt Transfers
Bulk Transfers
Section 5.8 says that Bulk Transfers provide:
Access to the USB on a bandwidth-available basis
Retry of transfers, in the case of occasional delivery failure due to errors on the bus
Guaranteed delivery of data but no guarantee of bandwidth or latency
(Emphasis mine.)
I don't see a similar statement for Control Transfers. Do they also guarantee delivery? If not, how are users expected to handle failures?
Please provide a citation(s) to support your answer.
The USB specification provides robust error detection and recovery for control transfers. The control transfer will either be completed or the USB host will know that it failed, and I think that's what "guaranteed delivery" is supposed to mean. This is important because control transfers are used to set up the device when you plug it into a computer and they are also used for many important purposes by the various USB device classes (e.g. they are used to set the baud rate of a serial port on a USB CDC ACM device).
From section 5.5.5 of the USB 2.0 specification:
The USB provides robust error detection and recovery/retransmission for errors that occur during control transfers. Transmitters and receivers can remain synchronized with regard to where they are in a control transfer and recover with minimum effort. Retransmission of Data and Status packets can be detected by a receiver via data retry indicators in the packet. A transmitter can reliably determine that its corresponding receiver has successfully accepted a transmitted packet by information returned in a handshake to the packet. The protocol allows for distinguishing a retransmitted packet from its original packet except for a control Setup packet. Setup packets may be retransmitted due to a transmission error; however, Setup packets cannot indicate that a packet is an original or a retried transmission.
The only transfer type without guaranteed delivery is isochronous. Also, the start of frame (SOF) packets don't have guaranteed delivery.
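In practice, "the host will know that it failed" surfaces to software as an error code that the caller must check. A minimal sketch with libusb (device open/claim omitted; the retries happen below this API, in the host controller):

```c
#include <stdio.h>
#include <libusb-1.0/libusb.h>

/* Issues a standard GET_DESCRIPTOR (device descriptor) control transfer
   and checks the result: libusb reports either the byte count or a
   negative error code once the bus-level retries are exhausted. */
int read_device_descriptor(libusb_device_handle *dev)
{
    unsigned char desc[18];
    int rc = libusb_control_transfer(
        dev,
        LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_STANDARD
                           | LIBUSB_RECIPIENT_DEVICE,  /* bmRequestType */
        LIBUSB_REQUEST_GET_DESCRIPTOR,                 /* bRequest      */
        (LIBUSB_DT_DEVICE << 8) | 0,                   /* wValue        */
        0,                                             /* wIndex        */
        desc, sizeof desc,
        1000 /* timeout, ms */);

    if (rc < 0) {
        /* The transfer failed and the host knows it: the device may have
           STALLed the request, been disconnected, or timed out. */
        fprintf(stderr, "control transfer failed: %s\n", libusb_strerror(rc));
        return -1;
    }
    printf("got %d descriptor bytes\n", rc);
    return 0;
}
```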

What is the use of multiple control endpoints (non-EP0)?

I learned on the OSDev wiki that Endpoint 0 is the default control pipe, allowing for bi-directional control transfers. This is used for device configuration, e.g. to retrieve device descriptors. The USB 2.0 spec explains this more thoroughly in section 5.5 Control Transfers.
There is also a limited number of endpoints available (2 for low-speed, 15 for full- and high-speed devices). Somewhere in the USB 2.0 spec, I have read that there must be at least one control pipe. This implies that there may be multiple control endpoints, but what is the use of that? Do you know any particular USB device or class that has a non-EP0 endpoint configured as a control pipe?
Later, I found this in the spec, section 10.1.2 Control Mechanisms:
A particular USB device may allow the use of additional message pipes
to transfer device-specific control information. These pipes use the
same communications protocol as the default pipe, but the information
transferred is specific to the USB device and is not standardized by
the USB Specification.
If I understand it correctly, this means that a non-EP0 control endpoint cannot be used to configure the device (say, with a standard request such as GET_DESCRIPTOR). But the setup/data/status stages still seem to be available ("[..] use the same communications protocol [..]"). Is this correct? Or is the use of standard/class requests forbidden for non-EP0?
Background: while working on an emulated USB device in QEMU, the need for a USB monitor for debugging purposes arose. While inspecting the QEMU core USB code, I noticed that it only processes control commands for EP0; other endpoints are treated as data. There are some virtual devices (host-libusb) that always reject control transfers on those other endpoints. Hence the question of whether this is correct behavior (and, if it is, whether any devices really implement non-EP0 control endpoints).
As far as I can tell, there is no use for a non-EP0 control endpoint. I have developed several products that use custom control transfers on endpoint 0 as the main way to send device-specific requests and I have not encountered any fundamental problems with doing that.
If you did make a non-EP0 control endpoint I think your understanding is correct; you wouldn't be able to use it for standard requests but you would be able to use it for custom requests and the transaction sequences would be the same as on EP0.
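For completeness, declaring such an endpoint only takes the bmAttributes field of its descriptor: bits 1..0 = 00b mean Control (USB 2.0, Table 9-13). A sketch of what a hypothetical non-EP0 control endpoint descriptor would look like:

```c
#include <stdint.h>

/* USB 2.0 standard endpoint descriptor (Table 9-13) for a hypothetical
   device-specific control endpoint at address 1; only the transfer-type
   bits of bmAttributes distinguish it from a bulk or interrupt endpoint. */
static const uint8_t control_ep1_descriptor[7] = {
    7,      /* bLength                                              */
    0x05,   /* bDescriptorType = ENDPOINT                           */
    0x01,   /* bEndpointAddress = EP1 (the direction bit is ignored
               for control endpoints: they are bidirectional)       */
    0x00,   /* bmAttributes: bits 1..0 = 00b => Control transfer    */
    64, 0,  /* wMaxPacketSize = 64 (little-endian)                  */
    0       /* bInterval: unused for control endpoints              */
};
```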

UDP broadcast/multicast vs unicast behaviour (dropped packets)

I have an embedded device (source) which is sending out a stream of (audio) data in chunks of 20 ms (about 330 bytes each) by means of UDP packets. The network volume is thus fairly low at about 16 kB/s (in practice somewhat more due to UDP/IP overhead). The device runs the lwIP stack (v1.3.2) and connects to a WiFi network using a WiFi solution from H&D Wireless (HDG104, WiFi G-mode). The destination (sink) is a Windows Vista PC which is also connected to the WiFi network, using a USB WiFi dongle (WiFi G-mode). A program running on the PC allows me to monitor the number of dropped packets. I am also running Wireshark to analyze the network traffic directly. No other clients are actively sending data over the network at this point.
When I send the data using broadcast or multicast, many packets are dropped, sometimes up to 15%. However, when I switch to UDP unicast, the number of dropped packets is negligible (< 2%).
With UDP I expect packets to be dropped (which is OK in my audio application), but why do I see such a big difference in performance between broadcast/multicast and unicast?
My router is a WRT54GS (FW v7.50.2) and the PC (sink) uses a TRENDnet TEW-648UB network adapter, running in WiFi G-mode.
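For context, the drop counter works by prefixing each datagram with a sequence number and counting gaps at the sink. A minimal sketch of the sender side (BSD-style sockets for illustration; address, port and payload are placeholders matching the numbers above):

```c
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Sends 20 ms audio chunks over UDP, each prefixed with a 32-bit sequence
   number, so the sink can count gaps = dropped packets. */
int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5004);
    inet_pton(AF_INET, "192.168.1.50", &dst.sin_addr);  /* unicast sink */

    uint8_t pkt[4 + 330];                /* seq number + one audio chunk */
    for (uint32_t seq = 0; ; seq++) {
        uint32_t be = htonl(seq);
        memcpy(pkt, &be, 4);
        /* ... fill pkt + 4 with the next 330 audio bytes ... */
        sendto(s, pkt, sizeof pkt, 0, (struct sockaddr *)&dst, sizeof dst);
        usleep(20 * 1000);               /* one chunk every 20 ms */
    }
}
```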
This looks like a well-known WiFi issue:
Quoted from http://www.wi-fiplanet.com/tutorials/article.php/3433451
The 802.11 (Wi-Fi) standards specify support for multicasting as part of asynchronous services. An 802.11 client station, such as a wireless laptop or PDA (not an access point), begins a multicast delivery by sending multicast packets in 802.11 unicast data frames directed to only the access point. The access point responds with an 802.11 acknowledgement frame sent to the source station if no errors are found in the data frame.
If the client sending the frame doesn't receive an acknowledgement, then the client will retransmit the frame. With multicasting, the leg of the data path from the wireless client to the access point includes transmission error recovery. The 802.11 protocols ensure reliability between stations in both infrastructure and ad hoc configurations when using unicast data frame transmissions.
After receiving the unicast data frame from the client, the access point transmits the data (that the originating client wants to multicast) as a multicast frame, which contains a group address as the destination for the intended recipients. Each of the destination stations can receive the frame; however, they do not respond with acknowledgements. As a result, multicasting doesn't ensure a complete, reliable flow of data.
The lack of acknowledgments with multicasting means that some of the data your application is sending may not make it to all of the destinations, and there's no indication of a successful reception. This may be okay, though, for some applications, especially ones where it's okay to have gaps in data. For instance, the continual streaming of telemetry from a control valve monitor can likely miss status updates from time-to-time.
This article has more information:
http://hal.archives-ouvertes.fr/docs/00/08/44/57/PDF/RR-5947.pdf
One very interesting side-effect of the multicast implementation (at the WiFi MAC layer) is that as long as your receivers are wired, you will not experience any issues: the acknowledged hop from the sender to the access point is effectively a unicast transmission, and the wired segment onward is reliable. However, with WiFi receivers (as in my case), packet loss is enormous and completely unacceptable for audio.
Multicast does not have ACK packets, so there is no retransmission of lost packets. This makes perfect sense: there are many receivers, and it's not as if they can all reply at the same time (the air is shared, like coax Ethernet). If they were all to send ACKs in sequence using some backoff scheme, it would eat all your bandwidth.
UDP streaming with packet loss is a well-known challenge and is usually solved using some type of forward error correction. Recently, a class of codes known as fountain codes, such as RaptorQ, has shown promise for the packet loss problem, in particular when there are several unreliable sources for the data at the same time (for example, multiple WiFi access points covering an area).
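To make the idea concrete, here is a toy XOR-parity scheme (not RaptorQ; send_udp is a hypothetical transmit hook): one parity packet per group lets the sink rebuild any single lost packet in that group.

```c
#include <stdint.h>
#include <string.h>

#define GROUP   4    /* one parity packet per 4 data packets            */
#define PKT_LEN 330  /* fixed payload size, as in the audio stream      */

/* Placeholder for the actual UDP transmit call. */
extern void send_udp(const uint8_t *data, size_t len, int is_parity);

static uint8_t parity[PKT_LEN];
static int in_group;

/* XORs each outgoing packet into a running parity buffer; after GROUP
   packets, the parity buffer is sent as well. The receiver can rebuild
   one missing packet per group by XORing the parity packet with the
   packets that did arrive. */
void fec_send(const uint8_t *payload)
{
    for (size_t i = 0; i < PKT_LEN; i++)
        parity[i] ^= payload[i];

    send_udp(payload, PKT_LEN, 0);

    if (++in_group == GROUP) {
        send_udp(parity, PKT_LEN, 1);
        memset(parity, 0, sizeof parity);
        in_group = 0;
    }
}
```

The trade-off is 25% bandwidth overhead and one group of latency; fountain codes achieve much better overhead/recovery ratios, which is why they are preferred for real deployments.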

USB - MTP/PTP without Interrupt Endpoint

Since we plan to use MTP (Media Transfer Protocol) for our next device, we are evaluating MTP as a replacement for the current (unstable) USB drivers in the currently released device.
The limitation on this device is that its processor (StrongARM) supports only up to 3 endpoints:
"Serial port 0 is a universal serial bus device controller (UDC) that supports three endpoints and can operate half-duplex at a baud rate of 12 Mbps (slave only, not a host or hub controller)."
But according to the specification, MTP needs at least 4 endpoints (from the PTP spec):
"The device shall contain at least four endpoints: default, Data-In, Data-Out, and an Interrupt endpoint."
Now the question: can we just skip the interrupt endpoint on the device? I know that it violates the specification, but what happens if we do?
From our current evaluation software I can see the following scenarios:
The 'space available' value is not updated: the user will see that there is 100 MB of free memory, but placing a 1 MB file gives the error "Not Enough Memory".
Non-host-driven actions are not visible on the host (so when files are deleted, created or moved on the device, the connected host does not know about it).
If we can live with it, is it advisable to implement it this way?
UPDATE: Damn... when I tested it last time, I had just removed the code for interrupt-EP data transmission. Now I have also removed the endpoint definition (I do not create the endpoint anymore), and from this point on the MTP connection couldn't be established any more :(
It seems that the Windows driver (WPD) requires the interrupt endpoint, even if it's not used. Bad luck...
Does anyone have an idea whether and how to get MTP working with 3 endpoints?
Finally I got an answer from Microsoft:
The 3-endpoints setup is not supported.
The interrupt endpoint is required so that the driver can receive MTP events from the device. These events are a notification mechanism that the driver relies on to relay events to applications (e.g. when an object is created, updated, or removed).
If your device does nothing with the endpoint (i.e. send no events), applications such as Explorer will not behave correctly whenever objects on your device are changed.
So we buried our plans... :(
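For reference, this is the endpoint set the PTP class interface (and hence the WPD driver) expects besides the default control endpoint EP0; a sketch with typical full-speed values (addresses and packet sizes are illustrative, not mandated):

```c
#include <stdint.h>

/* Endpoint descriptors for the minimal MTP/PTP interface: besides the
   default control endpoint (EP0), the class requires Bulk-IN, Bulk-OUT
   and an Interrupt-IN endpoint for events, i.e. four endpoints in total,
   one more than the StrongARM UDC provides. */
static const uint8_t mtp_endpoints[] = {
    /* Bulk-IN: Data-In phase of MTP transactions */
    7, 0x05, 0x81, 0x02, 64, 0, 0,
    /* Bulk-OUT: Data-Out phase */
    7, 0x05, 0x02, 0x02, 64, 0, 0,
    /* Interrupt-IN: asynchronous MTP events (e.g. ObjectAdded) */
    7, 0x05, 0x83, 0x03, 64, 0, 10,  /* bInterval = 10 ms polling */
};
```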