I2C Recovering from Clock Stretching by the Master

In clock stretching, if the slave holds the clock line low, the master must wait before it can conclude anything about an ACK on the line. Since the slave can release the clock signal at any time, how do we interpret the SDA line value if the slave releases the clock, say, 3/4 of the way through the master's clock period? Let me illustrate with an example:
Consider an I2C master with a 100 kHz clock rate (10 us period).
When the master is transmitting, there is a rising edge on SCL every 5 us.
During the ACK period, assume the slave holds the SCL line low for the first 7.5 us.
At 7.5 us, assume the slave releases SCL while driving SDA low.
In this case, the SCL line will be high for 2.5 us before going low again and then proceeding with its 10 us period.
Further, assume the slave then allows SDA to go high before the next rising edge of SCL (i.e. it holds SDA low for less than 7.5 us).
Which rising edge of SCL indicates the valid SDA value?
Is it the first rising edge (where SCL only stays high for 2.5 us)?
Or is it the 2nd rising edge that is part of a full SCL clock period (5 us low, followed by 5 us high)?

I think your premise is false to begin with.
For a 100 kHz clock, the rising edge on SCL occurs every 10 us.
Therefore, there is no second rising edge to be concerned about: SDA is sampled on the single rising edge that occurs once the slave releases SCL.
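To make the handling concrete, here is a minimal sketch of how a bit-banged I2C master typically reads the ACK bit under clock stretching. The GPIO helpers (scl_release, scl_read, scl_drive_low, sda_read, delay_us) and the timeout value are hypothetical placeholders for your platform's pin functions, not a definitive implementation.

```c
#include <stdbool.h>
#include <stdint.h>

extern void scl_release(void);     /* let SCL float high (open drain) */
extern void scl_drive_low(void);
extern bool scl_read(void);
extern bool sda_read(void);
extern void delay_us(uint32_t us);

/* Returns true if the slave ACKed (SDA sampled low), false on NACK/timeout. */
bool i2c_read_ack(void)
{
    scl_release();                      /* master stops driving SCL low */

    /* Clock stretching: the slave may keep SCL low. The master waits until
     * SCL is actually high before sampling SDA; the clock period only
     * resumes from that point. */
    uint32_t timeout_us = 1000;         /* arbitrary example timeout */
    while (!scl_read()) {
        if (timeout_us-- == 0)
            return false;               /* slave never released SCL */
        delay_us(1);
    }

    bool ack = (sda_read() == false);   /* sample SDA while SCL is high */
    delay_us(5);                        /* remainder of the high phase (100 kHz) */
    scl_drive_low();                    /* end of the ACK clock pulse */
    return ack;
}
```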

Need burst speed messages per second for devices at various times during a day with Azure IoT hub

While Azure Event Hubs can handle thousands or even millions of messages per second, Azure IoT Hub has a surprisingly low limit here.
S1 has a 12 msg/sec speed but allows 400.000 daily messages per unit.
S2 has a 120 msg/sec speed but allows 6.000.000 daily messages per unit.
S3 has a 6000 msg/sec speed but allows 300.000.000 daily messages per unit.
Imagine an IoT solution where your devices normally send 1 message every hour, but can activate a short "realtime" mode that sends a message every second for about 2 minutes.
Example: 10.000 IoT devices:
Let's say 20% of these devices happen to start a realtime-mode session simultaneously 4 times a day. (We do not have control over when those sessions are started by individual customers.) That is 2000 devices, and the burst speed needed is then 2000 msg/second.
Daily msg need:
Normal messages: 10.000 dev * 24 hours = 240.000 msg/day
Realtime messages: 2.000 dev * 120 msg (2 min with 1 msg every second) * 4 times a day = 960.000 msg/day
Total daily msg count needed: 240.000 + 960.000 = 1.200.000 msg/day.
Needed Azure IoT hub tier: S1 with 3 units gives 1.200.000 msg/day. ($25 * 3 units = $75/month)
Burst speed needed:
2000 devices sending simultaneously every second for a couple of minutes a few times a day: 2000 msg/second.
Needed Azure IoT hub tier: S2 with 17 units gives a speed of 2040 msg/second. ($250 * 17 units = $4250/month) Or go for S3 with 1 unit, which gives a speed of 6000 msg/second. ($2500/month)
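For reference, the unit math above can be reproduced with a quick sketch, using only the per-unit limits and prices quoted in this question:

```c
#include <stdio.h>

int main(void)
{
    long daily_msgs = 10000L * 24          /* normal: 1 msg/hour                  */
                    + 2000L * 120 * 4;     /* realtime: 2 min @ 1 msg/s, 4x/day   */
    int  burst_rate = 2000;                /* msg/sec while realtime mode is on   */

    int s1_units = (int)((daily_msgs + 400000 - 1) / 400000);  /* sized by daily count */
    int s2_units = (burst_rate + 120 - 1) / 120;               /* sized by burst rate  */

    printf("daily messages: %ld\n", daily_msgs);               /* 1200000 */
    printf("S1 units (daily count): %d -> $%d/month\n", s1_units, s1_units * 25);
    printf("S2 units (burst rate):  %d -> $%d/month\n", s2_units, s2_units * 250);
    printf("S3 (1 unit, 6000 msg/s): $2500/month\n");
    return 0;
}
```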
The daily message count requires only a low IoT Hub tier because of the modest number of messages per day, but the burst speed needed when realtime mode is activated requires a disproportionately high IoT Hub tier, which skyrockets the monthly cost of the solution (roughly 33 times), ruining the business case.
Is it possible to allow for temporary burst speeds at varying times during a day as long as the total number of daily messages sent does not surpass current tier max limit?
I understood from a 2016 article by Nicole Berdy that the throttling on Azure IoT Hub is in place to avoid DDoS attacks and misuse. However, to be able to simulate realtime-mode functionality with Azure IoT Hub we need more Event Hubs-like messages-per-second speed. Can this be opened up by contacting support or something? Will it help if the whole solution lives inside its own protected network bubble?
Thanks,
For real-time needs, definitely always consider Azure IoT Edge and double-check whether it can be implemented in your scenario.
In the calculations above you state, for example, that S2 has a speed of 120 msg/sec. That is not fully correct. Let me explain:
The throttle for device-to-cloud sends is applied only if you exceed 120 send operations/sec/unit.
Each message can be up to 256 KB, which is the maximum message size.
Therefore, the questions you need to answer to successfully implement your scenario with the lowest cost possible are:
What is the message size of my devices?
Do I need to display messages in near-real-time in the customer's cloud environment, or is my concern the level of detail of the sensor data during a specific time window?
When I enable "burst mode", am I leveraging the batch mode of the Azure IoT SDK?
To your questions:
Is it possible to allow for temporary burst speeds at varying times during a day as long as the total number of daily messages sent does not surpass the current tier max limit?
No. The limit for S2, for example, is 120 device-to-cloud send operations/sec/unit.
Can this be opened up by contacting support or something? Will it help if the whole solution is living inside its own protected network bubble?
No. The only exception is when you need to increase the total number of devices plus modules that can be registered to a single IoT Hub beyond 1,000,000; in that case you should contact Microsoft Support.

Behaviour of redis client-output-buffer-limit during resynchronization

I'm assuming that during replica resynchronisation (full or partial), the master will attempt to send data as fast as possible to the replica. Wouldn't this mean the replica output buffer on the master would rapidly fill up since the speed the master can write is likely to be faster than the throughput of the network? If I have client-output-buffer-limit set for replicas, wouldn't the master end up closing the connection before the resynchronisation can complete?
Yes, the Redis master will close the connection and the synchronization will be started from the beginning again. But please find some details below:
Do you need to touch this configuration parameter and what is the purpose/benefit/cost of it?
There is almost zero chance this will happen with the default configuration on reasonably modern hardware.
"By default normal clients are not limited because they don't receive data
without asking (in a push way), but just after a request, so only asynchronous clients may create a scenario where data is requested faster than it can read." - the chunk from documentation .
Even if that happens, replication will be restarted from the beginning, but it may lead to an infinite loop in which replicas continuously ask for synchronization over and over. Each time, the Redis master needs to fork a whole memory snapshot (perform a BGSAVE) and may use up to 3 times the RAM of the initial snapshot size during synchronization. This causes higher CPU utilization, memory spikes, network utilization (if any), and I/O.
General recommendations to avoid production issues tweaking this configuration parameter:
Don't decrease this buffer, and before increasing its size make sure you have enough memory on your box.
Consider the total amount of RAM needed as the snapshot memory size (doubled for the copy-on-write BGSAVE process), plus the size of any other configured buffers, plus some extra capacity.
Please find more details in the Redis documentation on client output buffer limits.
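For reference, the relevant setting lives in redis.conf under the client-output-buffer-limit directive; the values below are the shipped defaults (hard limit 256 MB, or soft limit 64 MB sustained for 60 seconds). Older Redis versions use the class name slave instead of replica. Treat this as a reference point, not a recommendation to change it.

```
# redis.conf -- default replica output buffer limits: <hard> <soft> <soft-seconds>.
# The connection is dropped when the hard limit is hit, or when the soft limit
# is exceeded continuously for soft-seconds.
client-output-buffer-limit replica 256mb 64mb 60
```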

USB Packet Length With Report ID

I'm asking this question because the USB HID documentation isn't very explicit about this. My question is in regards to full-speed USB HID devices and their report descriptors. I have a device with a Report ID of 2, and the Report Count in the report descriptor is set to 64. My current understanding is that the Report Count's worth of data is preceded by the Report ID when transferring a USB packet, meaning the size of the USB packet will be the size specified by the Report Count plus one byte for the Report ID, for a total transfer of 65 bytes. I've tried this and it's working.
My question here is: is this a correct understanding of the USB spec, or am I exploiting something that could be patched later on by Windows updates or Mac updates, etc.?
According to the USB HID spec, a USB transaction is limited to 64 bytes for high-speed devices. However, this is outdated information, since high-speed devices can now reach 1024 bytes per transfer; full-speed devices are specified at a maximum of 64 bytes per transfer. The spec also says that the Report Count refers to the number of data fields in a report transfer. It doesn't say USB transaction, just report transfer.
For Report ID's, the USB HID spec states, "Report ID items are used to indicate which data fields are represented in each report structure. A Report ID item tag assigns a 1-byte identification prefix to each report transfer."
This leads me to believe that, although full-speed devices are limited to 64 bytes per USB transaction, that limit does not take the Report ID into account. Is this correct?
No, the Report ID counts as data. With the Report ID included, the remaining report data must not be longer than 63 bytes.
Note that this limit is only enforced by hardware in full-speed mode. High-speed interrupt endpoints can use up to 1024 bytes per transfer.
The current HID spec, version 1.11, is from 2001, well before USB 2.0 high speed was in common use; interrupt transfers longer than 64 bytes were not available back then.
You may want to check the behavior of your device when it is connected to an old USB 1.1 (full-speed) hub.
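For illustration, a fragment of a report descriptor that keeps the whole transfer within 64 bytes might look like the sketch below. The vendor-defined usage and the surrounding collection items are assumptions; only the item encodings are taken from the HID spec.

```c
/* With Report ID 2 and 63 one-byte fields, the complete report
 * (ID byte + data) fits the 64-byte full-speed interrupt limit. */
static const unsigned char hid_report_desc_fragment[] = {
    0x85, 0x02,       /* REPORT_ID (2)                  */
    0x75, 0x08,       /* REPORT_SIZE (8 bits per field) */
    0x95, 0x3F,       /* REPORT_COUNT (63 fields)       */
    0x09, 0x01,       /* USAGE (vendor-defined, 1)      */
    0x81, 0x02,       /* INPUT (Data,Var,Abs)           */
};
```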

What is the correct definition of interrupt latency in RTOS?

I have read two different definitions of 'interrupt latency' in the context of an RTOS.
"In computing, interrupt latency is the time that elapses from when an interrupt is generated to when the source of the interrupt is serviced"
(source: https://en.wikipedia.org/wiki/Interrupt_latency )
"The ability to guarantee a maximum latency between an external interrupt and the start of the interrupt handler."
(source: What makes a kernel/OS real-time? )
Now, my question is what is the correct definition of 'interrupt latency'?
For example:
External interrupt occurrence timestamp: 00 hr:00 min:20 sec
Timestamp when execution jumps into the ISR: 00 hr:00 min:25 sec
Timestamp when execution exits the ISR after servicing: 00 hr:00 min:43 sec
Now what is the interrupt latency? Is it 5 seconds, or 23 seconds?
I think the first definition is correct, but you have misunderstood how interrupts work in practice and what is meant by "serviced".
The control flow has three stages: HW interrupt -> interrupt service routine -> process. The ISR is usually very short; it merely clears the source of the interrupt and marks a process as ready to run.
An example: you have a process that calls read to access data on disk. This process will block until the disk has performed the I/O. Once the I/O completes, the hardware raises an interrupt to notify the CPU, which goes into the ISR, clears the interrupt, and then marks the process that was blocked as able to be scheduled.
Why is this the interrupt latency that needs to be measured? Because this is where the actual processing required to do the work happens. If the interrupt needs a response within a specified time (i.e. it is a real-time system), then we need to know the time from the interrupt until we start processing that response, which means we need to know the latency from the interrupt until our real-time process gets scheduled.
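To make the three-stage flow concrete, here is a minimal FreeRTOS-style sketch. The FreeRTOS calls are real API; the disk driver hookup (disk_clear_irq and how the ISR is registered) is hypothetical and platform-specific.

```c
#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

static SemaphoreHandle_t disk_done;   /* created with xSemaphoreCreateBinary() at startup */
extern void disk_clear_irq(void);     /* hypothetical: acknowledge the device interrupt   */

/* Stage 2: the ISR, kept short. "Interrupt latency" in the first sense is the
 * time from the hardware asserting the IRQ until this function starts running. */
void disk_isr(void)
{
    BaseType_t woken = pdFALSE;
    disk_clear_irq();                           /* clear the source of the interrupt */
    xSemaphoreGiveFromISR(disk_done, &woken);   /* mark the waiting task as ready     */
    portYIELD_FROM_ISR(woken);                  /* request a context switch if needed */
}

/* Stage 3: the process/task that actually handles the data. The latency this
 * answer cares about runs from the IRQ until this task is scheduled again. */
void disk_task(void *arg)
{
    (void)arg;
    for (;;) {
        xSemaphoreTake(disk_done, portMAX_DELAY);  /* blocks, like read() on disk */
        /* ...process the data the driver just completed... */
    }
}
```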

Who runs the scheduler in operating systems when CPU is given to user processes?

Suppose there are 10 processes P1, P2, ..., P10 that are scheduled by the scheduler using a round-robin policy to access the CPU.
Now, when process P1 is using the CPU and the current time slice has expired, P1 needs to be preempted and P2 needs to be scheduled. But since P1 is using the CPU, who preempts P1 and schedules P2?
We may say the scheduler does this, but how does the scheduler run when the CPU is held by P1?
It's exactly like jcoder said, but let me elaborate (and make an answer instead of a comment).
Basically, when your OS boots, it initializes an interrupt vector so that, upon a given interrupt, the CPU calls the appropriate interrupt handler.
The OS, also during boot, checks the available hardware and detects that your board has some number of timers.
Timers are simply hardware circuits that tick at a given clock speed and can be set to send an interrupt after a given time (each usually with a different precision, depending on its clock speed and other things).
After the OS detects the timers, it sets one of them, for example, to send an interrupt every 50 ms. Now every 50 ms the CPU will stop whatever it's doing and invoke that interrupt handler, usually the scheduler code, which in turn will check which process is currently running and decide whether or not to keep it, depending on the scheduling policy.
The scheduler, like most of the OS actually, is a passive thing that acts only when there's some event.
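To make that concrete, here is a toy sketch of such a timer tick handler; all names are hypothetical and not taken from any particular kernel.

```c
#define TIME_SLICE_TICKS 5              /* e.g. 5 ticks * 10 ms = a 50 ms quantum */

struct process { int remaining_ticks; };

static struct process *current;         /* the process the CPU was running        */
extern void schedule(void);             /* picks the next ready process           */

/* Installed in the interrupt vector for the timer IRQ at boot.
 * The hardware timer pulls control away from P1 and into the kernel here. */
void timer_tick_isr(void)
{
    if (current && --current->remaining_ticks <= 0) {
        current->remaining_ticks = TIME_SLICE_TICKS;
        schedule();                     /* preempt P1, pick P2, context switch */
    }
    /* otherwise simply return to the interrupted process */
}
```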
Based on your question (P1 needs to be preempted and P2 needs to be scheduled): this is the job of the CPU scheduler, the part of the operating system that watches the running processes. Its responsibility is to select a process from among the processes in memory that are ready to execute and allocate the CPU to it.
CPU scheduling takes place when a process:
Switches from running to waiting state
Switches from running to ready state
Switches from waiting to ready
Terminates
The dispatcher module then gives control of the CPU to the process selected by the CPU scheduler.
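As a toy illustration of this scheduler/dispatcher split, here is a minimal round-robin sketch; the process table, state names, and the context_switch() primitive are hypothetical.

```c
#define MAX_PROCS 10

enum state { READY, RUNNING, WAITING, TERMINATED };

struct proc { int pid; enum state state; /* saved registers, etc. */ };

static struct proc table[MAX_PROCS];
static int current_idx;

extern void context_switch(struct proc *from, struct proc *to);  /* dispatcher core */

void schedule(void)
{
    /* Scheduler: scan the table circularly for the next READY process. */
    for (int i = 1; i <= MAX_PROCS; i++) {
        int next = (current_idx + i) % MAX_PROCS;
        if (table[next].state == READY) {
            struct proc *prev = &table[current_idx];
            if (prev->state == RUNNING)
                prev->state = READY;            /* running -> ready (preempted) */
            table[next].state = RUNNING;
            current_idx = next;
            context_switch(prev, &table[next]); /* dispatcher hands the CPU over */
            return;
        }
    }
    /* no other READY process: the current one keeps the CPU */
}
```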