USB Packet Length With Report ID

I'm asking this question because the USB HID documentation isn't very explicit about this. My question is in regard to full-speed USB HID devices and their Report Descriptors. I have a device with a Report ID of 2. The Report Count in the Report Descriptor is set to 64. My current understanding is that the report data is preceded by the Report ID when a USB packet is transferred, meaning the size of the USB packet will be the size specified by the Report Count plus one byte for the Report ID, for a total transfer size of 65 bytes. I've tried this and it's working.
My question here is: is this a correct understanding of the USB spec, or am I exploiting something that could be patched later on by Windows updates, Mac updates, etc.?
According to the USB HID spec, a USB transaction is limited to 64 bytes for high-speed devices. However, this is outdated information, since high-speed devices can reach 1024 bytes per transfer. Full-speed devices are now specified to have a maximum of 64 bytes per transfer. The spec also says that the Report Count refers to the number of data fields in a report transfer. It doesn't say USB transaction, just report transfer.
For Report IDs, the USB HID spec states, "Report ID items are used to indicate which data fields are represented in each report structure. A Report ID item tag assigns a 1-byte identification prefix to each report transfer."
This leads me to believe that although full-speed devices are limited to 64 bytes per USB transaction, that limit does not take the Report ID into account. Is this correct?

No, the Report ID counts as data. With the Report ID included, the remaining report data must not be longer than 63 bytes.
Note that this limit is only enforced by hardware in full-speed mode. High-speed interrupt endpoints can be up to 1024 bytes per transfer.
The current HID spec, version 1.11, is from 2001 and thus predates USB 2.0 high speed quite a bit; interrupt transfers longer than 64 bytes were not available.
You may want to check the behavior of your device once it is connected to an old USB 1.1 (full-speed) hub.
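To make the size arithmetic concrete, here is a minimal sketch (the constant names are made up for illustration and do not come from any real HID stack) of how the 1-byte Report ID prefix eats into the 64-byte full-speed packet, so a report meant to fit a single packet should declare a Report Count of 63:

    # Rough size check for an interrupt report on a full-speed device.
    # The constants are illustrative, not taken from any particular firmware.
    FULL_SPEED_MAX_PACKET = 64   # full-speed interrupt endpoints max out at 64 bytes
    REPORT_ID = 2                # Report ID declared in the report descriptor
    REPORT_COUNT = 63            # 1-byte data fields declared in the descriptor

    def build_report(payload: bytes, report_id: int = REPORT_ID) -> bytes:
        """Prefix the data with the 1-byte Report ID, as the HID spec describes."""
        if len(payload) != REPORT_COUNT:
            raise ValueError("payload must match the descriptor's Report Count")
        return bytes([report_id]) + payload

    report = build_report(bytes(REPORT_COUNT))
    # Report ID + data must fit in one full-speed packet: 1 + 63 = 64 bytes.
    assert len(report) <= FULL_SPEED_MAX_PACKET
    print(len(report))  # 64

With a Report Count of 64, the same report becomes 65 bytes and no longer fits in a single 64-byte full-speed transaction, which is the situation described in the question.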

Related

Reservation Used Bytes parameter keeps changing

I created a dashboard in Cloud Monitoring to monitor BI Engine metrics. I have a chart to measure Reservation Used Bytes. The chart values keep changing, ranging from 30 GB down to 430 MB. Changing the time frame between days and weeks also does not change the chart. Why does the measurement appear to swing from high to low and back to high over time? And how can I see how many bytes have been used in total?
You are using a metric that is coupled to current usage, so it is expected to vary over time with increasing or decreasing values.
https://cloud.google.com/bigquery/docs/bi-engine-monitor#metrics
Reservation Used Bytes: Total capacity used in one Google Cloud project
If you need the total allocated bytes, you need to switch to this metric:
Reservation Total Bytes: Total capacity allocated to one Google Cloud project
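If you want to confirm the exact metric type string programmatically, here is a small sketch using the google-cloud-monitoring Python client; PROJECT_ID is a placeholder, and the starts_with filter simply lists the BigQuery metric descriptors so you can pick out Reservation Total Bytes by its display name:

    # Sketch: list BigQuery-related metric descriptors so the exact metric
    # type for "Reservation Total Bytes" can be looked up by its display name.
    # PROJECT_ID is a placeholder for your own Google Cloud project ID.
    from google.cloud import monitoring_v3

    PROJECT_ID = "my-project"
    client = monitoring_v3.MetricServiceClient()

    descriptors = client.list_metric_descriptors(
        request={
            "name": f"projects/{PROJECT_ID}",
            "filter": 'metric.type = starts_with("bigquery.googleapis.com")',
        }
    )
    for descriptor in descriptors:
        print(descriptor.type, "-", descriptor.display_name)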

Need burst speed messages per second for devices at various times during a day with Azure IoT hub

While Azure Event Hubs can handle thousands or even millions of messages per second, Azure IoT Hub has a surprisingly low limit on this:
S1 allows 12 msg/sec but 400,000 daily messages per unit.
S2 allows 120 msg/sec but 6,000,000 daily messages per unit.
S3 allows 6,000 msg/sec but 300,000,000 daily messages per unit.
Imagine an IoT solution where your devices normally send 1 message every hour but can activate a short "realtime" mode, sending a message every second for about 2 minutes.
Example: 10,000 IoT devices:
Let's say 20% of these devices happen to start a realtime-mode session simultaneously 4 times a day (we do not have control over when individual customers start those). That is 2,000 devices, and the burst speed needed is then 2,000 msg/second.
Daily message need:
Normal messages: 10,000 devices * 24 hours = 240,000 msg/day
Realtime messages: 2,000 devices * 120 msg (2 minutes with 1 msg every second) * 4 times a day = 960,000 msg/day
Total daily message count needed: 240,000 + 960,000 = 1,200,000 msg/day.
Needed Azure IoT Hub tier: S1 with 3 units gives 1,200,000 msg/day ($25 * 3 units = $75/month).
Burst speed needed:
2,000 devices sending simultaneously every second for a couple of minutes a few times a day: 2,000 msg/second.
Needed Azure IoT Hub tier: S2 with 17 units gives a speed of 2,040 msg/second ($250 * 17 units = $4,250/month), or go for S3 with 1 unit, which gives a speed of 6,000 msg/second ($2,500/month).
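For reference, the arithmetic above can be reproduced with a short sketch; the per-unit quotas, rates, and prices are taken as quoted in this question, not verified against current Azure pricing:

    import math

    # Tier figures as quoted above:
    # (daily message quota per unit, msg/sec per unit, USD per unit per month)
    TIERS = {
        "S1": (400_000, 12, 25),
        "S2": (6_000_000, 120, 250),
        "S3": (300_000_000, 6_000, 2_500),
    }

    devices = 10_000
    burst_devices = int(devices * 0.20)                  # 2,000 devices in realtime mode
    daily_msgs = devices * 24 + burst_devices * 120 * 4  # 240,000 + 960,000 = 1,200,000
    burst_rate = burst_devices                           # 2,000 msg/second during a burst

    for tier, (quota, rate, price) in TIERS.items():
        units_for_quota = math.ceil(daily_msgs / quota)  # units needed for the daily quota
        units_for_rate = math.ceil(burst_rate / rate)    # units needed for the burst speed
        units = max(units_for_quota, units_for_rate)
        print(f"{tier}: quota -> {units_for_quota} unit(s), burst -> {units_for_rate} unit(s), "
              f"total {units} unit(s) = ${units * price}/month")

The output shows the mismatch: the daily quota alone is covered by 3 S1 units, while the burst rate forces 17 S2 units or 1 S3 unit.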
The daily message count requires only a low IoT Hub tier because of the modest number of messages per day, but the burst speed needed when realtime mode is activated requires a disproportionately high IoT Hub tier, which skyrockets the monthly cost of the solution (33 times), ruining the business case.
Is it possible to allow for temporary burst speeds at varying times during a day as long as the total number of daily messages sent does not surpass the current tier max limit?
I understood from a 2016 article by Nicole Berdy that the throttling on Azure IoT Hub is in place to avoid DDoS attacks and misuse. However, to be able to simulate realtime-mode functionality with Azure IoT Hub, we need more Event Hub-like messages-per-second speed. Can this be opened up by contacting support or something? Will it help if the whole solution lives inside its own protected network bubble?
Thanks,
For real-time needs, definitely consider Azure IoT Edge, and double-check whether it is possible to implement it for your scenario.
In the calculations above you state, for example, that S2 has a speed of 120 msg/sec. That is not fully correct. Let me explain:
The throttle for device-to-cloud sends is applied only if you exceed 120 send operations/sec/unit.
Each message can be up to 256 KB, which is the maximum message size.
Therefore, the questions you need to answer to successfully implement your scenario with the lowest cost possible are:
What is the message size of my devices?
Do I need to display messages in near real time in the customer's cloud environment, or is my concern the level of detail from the sensors during a specific time?
When I enable "burst mode", am I leveraging the batch mode of the Azure IoT SDK?
To your questions:
Is it possible to allow for temporary burst speeds at varying times during a day as long as the total number of daily messages sent does not surpass the current tier max limit?
No; the limit for S2, for example, is 120 device-to-cloud send operations/sec/unit.
Can this be opened up by contacting support or something? Will it help if the whole solution is living inside its own protected network bubble?
No. The only exception is when you need to increase the total number of devices plus modules that can be registered to a single IoT Hub beyond 1,000,000; in that case you should contact Microsoft Support.
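To make the batch-mode hint concrete: a rough sketch of the burst scenario from the question, under the assumption (implied by the batching question above, but worth verifying against the IoT Hub throttling documentation) that one batched send counts as a single device-to-cloud send operation while each message in the batch still counts toward the daily quota:

    import math

    S2_SEND_OPS_PER_SEC_PER_UNIT = 120   # S2 throttle quoted in this answer
    BURST_DEVICES = 2_000                # devices in realtime mode at the same time

    def s2_units_needed(samples_per_batch: int) -> int:
        """S2 units needed if every device samples once per second but sends
        one batched message every `samples_per_batch` seconds."""
        send_ops_per_sec = BURST_DEVICES / samples_per_batch
        return math.ceil(send_ops_per_sec / S2_SEND_OPS_PER_SEC_PER_UNIT)

    for batch_size in (1, 5, 10, 30):
        print(f"batch of {batch_size:2d} samples -> {s2_units_needed(batch_size)} S2 unit(s), "
              f"added latency roughly {batch_size} s")

The trade-off is latency: the larger the batch, the fewer send operations per second, but the longer each sample waits on the device before it is sent.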

Can I read and write data to a mass storage flash drive with the LPC1768 alone?

I want to use the LPC1768 to read and write to a USB mass storage device.
For that, should I use the circuit in Fig. 30 or the one in Fig. 29?
I'm asking about the hardware design.
The USB mass storage device is obviously a USB device, and thus your LPC17xx circuit must supply the +5 V VBUS line. That is Figure 31 in the datasheet.
But that also means you have to implement the difficult USB host side on the LPC17xx, plus SCSI for mass storage.
You could use Figure 30 (OTG) if you needed to connect the LPC17xx itself to a host like a PC.
Note that SD cards are much simpler to use in software (via SPI), and they can be faster: 25 Mbit/s instead of 12 Mbit/s via USB.

What is the maximum database size in Titanium?

What is the maximum database size? Is there a chart that shows the maximum database size across different devices? Can you store a pre-populated database on an SD card, plug that into the device, and read that database with Titanium?
There is no limit (as far as I know, besides SQL limits) on the size of a database except the available storage on the device, but obviously your app won't really function responsively if you store a ridiculous amount of data (a million rows, say) on the device.
Such a chart would just be comparing phone flash memory.
Probably, but there hasn't been a lot of success. Check here, here, and here.

Data Optimization in GPS Tracking

I'm building a real-time GPS tracking system. The mobile client continuously sends location data to the server and updates the location of the tracked objects every 15 seconds.
My biggest problem is that the battery and data costs are very high.
Is there any solution that helps optimize the data transfer between client and server?
You know that you have a good solution when you reach 2-3 bytes per GPS position with 4-5 attributes (time, lat, lon, and optionally speed and heading).
Try to avoid security overhead; it destroys all attempts to reduce data size. The number of bytes that security (signatures, headers, keys) uses is far more than that of the GPS data packet itself.
Is there any solution that helps optimize the data transfer between client and server?
Yes, at least some tips: do not use XML; that blows up your data by a factor of 100 to 1000. Use a binary protocol. A WSDL web service is not well suited for this task either.
The less frequently the device needs to communicate, the better the chances of getting more fixes per kilobyte.
An uncompressed position needs 12 bytes: time (4), latitude (4), longitude (4).
Different companies have different solutions for compressing the data. I know of one patented solution and one confidential one; more I cannot tell you.
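As an illustration of the 12-byte uncompressed layout above, here is a small sketch; encoding latitude and longitude as signed 32-bit integers in units of 1e-7 degrees is my own assumption for the example, not something stated in the answer:

    import struct
    import time

    # 12-byte uncompressed position: time (4) + latitude (4) + longitude (4).
    # Lat/lon are scaled by 1e7 into signed 32-bit integers, which keeps
    # roughly centimetre-level resolution.
    def pack_position(timestamp: int, lat_deg: float, lon_deg: float) -> bytes:
        return struct.pack(
            "<Iii",                      # little-endian: uint32, int32, int32
            timestamp,                   # seconds since the Unix epoch
            int(round(lat_deg * 1e7)),   # latitude in 1e-7 degrees
            int(round(lon_deg * 1e7)),   # longitude in 1e-7 degrees
        )

    def unpack_position(payload: bytes) -> tuple:
        ts, lat, lon = struct.unpack("<Iii", payload)
        return ts, lat / 1e7, lon / 1e7

    packet = pack_position(int(time.time()), 48.858370, 2.294481)
    print(len(packet))             # 12 bytes, versus hundreds of bytes of XML
    print(unpack_position(packet))

Getting from 12 bytes down to the 2-3 bytes per position mentioned above presumably requires delta-encoding against previous fixes; the answer only hints that such schemes exist.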
Battery
If you disable the screen, you can record 8 hours of one-position-per-second logging on an iPhone 4.