Crystal/Core/MPU clock rate differences - embedded

I have an embedded system which on boot-up shows the following:
Clocking rate (Crystal/Core/MPU): 12.0/400/1000 MHz
Can anybody explain the differences between these three clock rates?
The processor is an ARMv7 (OMAP3xxx).

As Clement mentioned, the 12.0 is the frequency in MHz of the external oscillator. Core and MPU are the frequencies of the internal PLLs.
The MPU is the Microprocessor Unit subsystem. This is the actual Cortex-A8 core as well as some closely related peripherals, so your MPU is running at 1000 MHz (1 GHz). This is similar to the CPU frequency in your computer.
In the AM335x, the Core PLL is responsible for the following subsystems: SGX, EMAC, L3S, L3F, L4F, L4_PER, L4_WKUP, PRUSS IEP, Debugss. The subsystems may differ slightly based on the particular chip you are working with. Yours is running at 400 MHz. This can be thought of as similar to the Front Side Bus (FSB) frequency in your computer, though the analogy isn't exact.
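If the board is running Linux with cpufreq enabled, you can sanity-check the MPU clock at runtime. A minimal sketch, assuming the standard cpufreq sysfs path is exposed by your kernel (the path and its availability depend on the kernel configuration):

    /* Sketch: read the current MPU (CPU) clock on a Linux-based OMAP board
     * via the cpufreq sysfs interface. Assumes cpufreq is enabled; the exact
     * path depends on the kernel configuration. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq";
        FILE *f = fopen(path, "r");
        unsigned long khz = 0;

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        if (fscanf(f, "%lu", &khz) == 1)
            printf("MPU clock: %lu MHz\n", khz / 1000UL); /* e.g. 1000 MHz */
        fclose(f);
        return 0;
    }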

12 MHz is the frequency of the crystal oscillator present on the board to provide a time reference.
A TI OMAP contains two cores: an ARM and a DSP. The terminology used here is not clear, but it may be the frequencies of these cores. Check your datasheet to be sure.

When using the SPI protocol, is the output data rate synonymous with the baud rate?

I'm trying to learn how the SPI protocol works, and I'm working on a basic project using the STM32F407G-Discovery board.
This board has a built-in accelerometer (LIS3DSH), and it uses the SPI protocol. In the user manual, it states the following:
The LIS3DSH has ±2g/±4g/±6g/±8g/±16g dynamically selectable full-scale
and it is capable of measuring acceleration with an output data rate
of 3.125 Hz to 1.6 kHz.
This accelerometer is using SPI1, which is connected to APB2. I'm using STM32CubeMX to generate the initialization code (including the clock configuration), and it looks like the APB2 peripheral clock has a default value of 84 MHz.
Does this mean that I need to configure the APB2 peripheral clock so that it falls within the range of 3.125 Hz to 1.6 kHz? I can't imagine this is true, because I can't get the value low enough in STM32CubeMX; it throws an error if I go too low.
I'm also accounting for the baud rate control SPI register, which allows you to go as low as f_PCLK/256.
In other words, I'm a bit stuck on which clock frequency to use and which baud rate control to use.
I'm still learning embedded programming, and so my terminology might be incorrect.
The two are not related. The max SPI clock rate is 10 MHz (page 14). The output data rate of 3.125 Hz to 1.6 kHz is how fast the chip performs an acceleration conversion: at 3.125 Hz, a new conversion result is ready every 320 ms, and at 1.6 kHz, results are available every 625 µs. There is a trade-off between conversion rate, power consumption and accuracy. The datasheet leaves a lot of holes; I would suggest reading the MMA7660 datasheet to get a better understanding of how these types of chips work, and then coming back to your datasheet for implementation details.
You can use an SPI clock frequency of up to 10 MHz to read data from this chip.
(So a prescaler of 16 on the full-rate 84 MHz APB2 clock, giving 5.25 MHz, would be fine.)
The SPI clock determines how fast the data is transferred from the chip to the controller, not how fast the chip generates new results.
To always get the newest data, you could use the IRQ lines from the chip, or use a timer to trigger the transmission at the sampling rate.
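As a concrete illustration, here is a minimal sketch of an SPI1 master configuration using the STM32 HAL as generated by STM32CubeMX. The handle name and the CPOL/CPHA settings are assumptions; check them against the LIS3DSH datasheet. With the 84 MHz APB2 clock and a prescaler of 16, SCK ends up at 5.25 MHz, safely below the 10 MHz limit:

    /* Sketch: STM32F4 HAL SPI1 master setup with an 84 MHz APB2 clock and a
     * baud rate prescaler of 16 -> SCK = 84 MHz / 16 = 5.25 MHz (< 10 MHz max).
     * Handle name and CPOL/CPHA are assumptions; verify against the LIS3DSH
     * datasheet and your CubeMX project. */
    #include "stm32f4xx_hal.h"

    SPI_HandleTypeDef hspi1;

    void MX_SPI1_Init(void)
    {
        hspi1.Instance               = SPI1;
        hspi1.Init.Mode              = SPI_MODE_MASTER;
        hspi1.Init.Direction         = SPI_DIRECTION_2LINES;
        hspi1.Init.DataSize          = SPI_DATASIZE_8BIT;
        hspi1.Init.CLKPolarity       = SPI_POLARITY_HIGH;   /* SPI mode 3 assumed */
        hspi1.Init.CLKPhase          = SPI_PHASE_2EDGE;
        hspi1.Init.NSS               = SPI_NSS_SOFT;         /* CS driven as a GPIO */
        hspi1.Init.BaudRatePrescaler = SPI_BAUDRATEPRESCALER_16;
        hspi1.Init.FirstBit          = SPI_FIRSTBIT_MSB;
        hspi1.Init.TIMode            = SPI_TIMODE_DISABLE;
        hspi1.Init.CRCCalculation    = SPI_CRCCALCULATION_DISABLE;
        hspi1.Init.CRCPolynomial     = 7;

        if (HAL_SPI_Init(&hspi1) != HAL_OK) {
            /* error handling would go here */
        }
    }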

GNU Radio and bladeRF on Raspberry Pi (simple FSK system)

I am having a problem porting a GNU Radio setup from a PC (Windows 10, USB 3.0) to a Raspberry Pi 2 (USB 2.0). USB bandwidth and CPU should not be a problem, I think (only around 30% utilization while running). Essentially it looks like the RPi is 'pausing' during transmission, while the PC is not. The receiver is running on the PC in both cases. I am including a picture of what I see after the FSK demod when running the transmitter on the PC vs. the Pi (circled 'pause' area), as well as a picture of my (admittedly sloppy) schematic. Any help/tips are greatly appreciated. [Images: GNU Radio schematic; received signals]
Edit: It appears it may actually be processing limitations. Switching from 9400 baud to 2400 baud makes the issue go away. If anyone has experience with GNU Radio... am I doing anything overly inefficient, or should I just drop the comm rate?
The first thing I would do would be to lower your sample rates.
You don't need 1.5 MS/s if you are going to keep only the lowest 32 kHz in your low-pass filter.
Then you could do the same for your second stage after the quadrature demod if that's not enough (by the way, the sample rate of your second low-pass filter does not seem to match the actual sample rate of that stage, which is still 1.5 MS/s if I'm not mistaken).
Anyway, GNU Radio uses a lot of processing power, so try not to use a sampling rate way above what you actually need ;)
In your case, you could cut the incoming sample rate down to 64 kS/s (say 80 kS/s for safety). Roughly 18 times fewer samples to process might do the trick :)
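In GNU Radio this is just a matter of setting the decimation on the first low-pass filter block. Purely as an illustration of why it helps (this is not GNU Radio code), here is a tiny C sketch of a decimating filter, where everything downstream only sees one sample out of every D:

    /* Illustration only: a crude decimating low-pass (D-tap moving average).
     * Filtering and keeping one output per D inputs means everything after
     * this stage processes D times fewer samples, which is the point of
     * dropping from 1.5 MS/s to ~80 kS/s as early as possible. */
    #include <stddef.h>

    static size_t decimate(const float *in, size_t n, float *out, size_t D)
    {
        size_t m = 0;
        for (size_t i = 0; i + D <= n; i += D) {
            float acc = 0.0f;
            for (size_t k = 0; k < D; ++k)
                acc += in[i + k];           /* simple low-pass: average D samples */
            out[m++] = acc / (float)D;      /* one output per D inputs */
        }
        return m;
    }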

SWD programming adapter

Are ARM-JTAG-20-10 and J-LINK 9-PIN CORTEX-M ADAPTER pin compatible? Why such a big price difference?
Apart from the price levels and policies of two different manufacturers ("Why is a Fiat cheaper than a Porsche if both allow you to cruise at the max speed limit of your home country?"), the quality of the electronic hardware may differ notably, which can affect the maximum baud rate you can run with a given target PCB.
Furthermore, some JTAG pinout adapters include additional features such as galvanic isolation (I haven't checked whether this applies to the Segger part you mention).

Xilinx ISE Board, trying to make two clocks (ZYBO FPGA)

In the reference manual for the ZYBO board that I am using, it informs me that I have up to four clocks I can use. However, when I look through the UCF file, I can only find one of them.
Considering that the ISE tools might know where it is, I used the Timing Analyzer to try to get the system to generate a pin LOC that I could use, but this was a failure.
Then I had the idea to use the PlanAhead tools to see whether they would generate a UCF file with the needed clock pin locations. Again, this failed.
Have I misunderstood the manual? Is there only one clock pin available to me?
Here is the excerpt in question (12 Clock Sources):
The ZYBO provides
a 50 MHz clock to the Zynq PS_CLK input, which is used to generate the
clocks for each of the PS subsystems. The 50 MHz input allows the
processor to operate at a maximum frequency of 650 MHz and the DDR3
memory controller to operate at a maximum of 525 MHz (1050 Mbps). The
ZYBO Base System Design configures the PS to work properly with this
input clock, and should be used as a reference when creating custom
designs.
The PS has a dedicated PLL capable of generating up to four reference
clocks, each with settable frequencies, that can be used to clock
custom logic implemented in the PL. Additionally, The ZYBO provides an
external 125 MHz reference clock directly to pin L16 of the PL. The
external reference clock allows the PL to be used completely
independently of the PS, which can be useful for simple applications
that do not require the processor.
The PL of the Zynq-Z7010 also includes two MMCM’s and two PLL’s that
can be used to generate clocks with precise frequencies and phase
relationships. Any of the four PS reference clocks or the 125 MHz
external reference clock can be used as an input to the MMCMs and
PLLs. For a full description of the capabilities of the Zynq PL
clocking resources, refer to the “7 Series FPGAs Clocking Resources
User Guide” available from Xilinx.
Figure 13 outlines the clocking scheme used on the ZYBO. Note that the reference clock output from the Ethernet
PHY is used as the 125 MHz reference clock to the PL, in order to cut the cost of including a dedicated oscillator for
this purpose. Keep in mind that CLK125 will be disabled when the Ethernet PHY (IC1) is held in hardware reset by
driving the PHYRSTB signal low.
Regarding your description:
There is one external reference clock (125 MHz) and four internal reference clocks from the ARM part. These four clocks are not accessible as real pins but via the ARM-FPGA bridge; if I'm right, this component is called PS7.
Additional resources:
- UG585 - Zynq-7000 Technical Reference Manual, chap. 25.7 "PL Clocks" -> schematic for the PL clocks
Additionally, you can use the clock-modifying blocks (MMCM or PLL) to derive new clocks from these 5 'inputs'.

What is the minimum latency of USB 3.0?

First up, I don't know much about USB, so apologies in advance if my question is wrong.
In USB 2.0 the polling interval was 0.125 ms, so the best possible latency for the host to read some data from the device was 0.125 ms. I'm hoping for reduced latency in USB 3.0 devices, but I'm finding it hard to learn what the minimum latency is. The USB 3.0 spec says, "USB 2.0 style polling has been replaced with asynchronous notifications", which implies the 0.125 ms polling interval may no longer be a limit.
I found some benchmarks for a USB 3.0 SSD that suggest data can be read from the device in slightly less than 0.125 ms, and that includes all the time spent in the host OS and the device's flash controller.
http://www.guru3d.com/articles_pages/ocz_enyo_usb_3_portable_ssd_review,8.html
Can someone tell me what the lowest possible latency is? A theoretical answer is fine. An answer including the practical limits of the various versions of Linux and Windows USB stacks would be awesome.
To head off the "tell me what you're trying to achieve" question: I'm creating a debug interface for the ASICs my company designs, i.e. a PC connects to one of our ASICs via a debug dongle. One possible use case is to implement conditional breakpoints when the ASIC hardware only implements simple breakpoints. To do so, I need to determine when a simple breakpoint has been hit, evaluate the condition, and if it is false, set the processor running again. The simple breakpoint may be hit millions of times before the condition becomes true. We might implement the debug dongle on an FPGA or an off-the-shelf USB 3.0-enabled microcontroller.
Answering my own question...
I've come to realise that this question kind of misses the point of USB 3.0. Unlike 2.0, it is not a shared-bus system. Instead it uses a point-to-point link between the host and each device (I'm oversimplifying, but the gist is true). With USB 2.0, the 125 µs polling interval was critical to how the bus was time-division multiplexed between devices. However, because 3.0 uses point-to-point links, there is no multiplexing to be done and thus the polling interval no longer exists. As a result, the latency on packet delivery is much lower than with USB 2.0.
In my experiments with a Cypress FX-3 devkit, I have found that it is easy enough to get a round trip from a Windows application to the device and back with an average latency of 30 µs. I suspect that the vast majority of that time is spent in various OS delays, e.g. the user-space to kernel-space mode switch and the DPC latency within the driver.
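For what it's worth, here is a minimal sketch of how such a round-trip measurement can be made with libusb-1.0 and bulk transfers. The VID/PID and endpoint addresses are placeholders for a hypothetical device running loopback firmware, and the POSIX timing call differs from what I actually used on Windows:

    /* Sketch: measure average USB bulk round-trip latency with libusb-1.0.
     * VID/PID (0x1234/0x5678) and endpoints (0x01 OUT, 0x81 IN) are
     * placeholders for a hypothetical loopback device. */
    #include <libusb-1.0/libusb.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        unsigned char buf[64] = {0};
        int transferred, i;
        struct timespec t0, t1;
        double total_us = 0.0;

        libusb_init(NULL);
        libusb_device_handle *h = libusb_open_device_with_vid_pid(NULL, 0x1234, 0x5678);
        if (h == NULL || libusb_claim_interface(h, 0) != 0) {
            fprintf(stderr, "device not found or interface busy\n");
            return 1;
        }

        for (i = 0; i < 1000; i++) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            libusb_bulk_transfer(h, 0x01, buf, sizeof buf, &transferred, 100); /* host -> device */
            libusb_bulk_transfer(h, 0x81, buf, sizeof buf, &transferred, 100); /* device -> host */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            total_us += (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        }
        printf("average round trip: %.1f us\n", total_us / 1000.0);

        libusb_release_interface(h, 0);
        libusb_close(h);
        libusb_exit(NULL);
        return 0;
    }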
I've got a couple of resources for you. One I've just downloaded is the complete spec... several PDFs zipped up for USB 3. Here is a short excerpt from pages 58-59 (USB 3_r1.0_06_06_2011.pdf):
USB 2.0 transmits SOF/uSOF at fixed 1 ms/125 µs intervals. A device driver may change the interval with small finite adjustments depending on the implementation of host and system software. USB 3.0 adds a mechanism for devices to send a Bus Interval Adjustment Message that is used by the host to adjust its 125 µs bus interval up to +/-13.333 µs.
In addition, the host may send an Isochronous Timestamp Packet (ITP) within a relaxed timing window from a bus interval boundary.
Here is one more resource that looked interesting, which deals with calculating latency.
You make a good point about operating-system latency issues, especially in non-real-time operating systems.
I might suggest that you check on SuperUser too; maybe someone has other ideas. Cheers.
I dispute the marked answer.
On Windows there is no way to achieve the stated round-trip latency over USB, SuperSpeed (3.0) or not. The documentation states:
The number of isochronous packets must be a multiple of the number of packets per frame.
https://learn.microsoft.com/en-us/windows-hardware/drivers/usbcon/transfer-data-to-isochronous-endpoints
The number of packets per frame is given by bInterval and also determines the polling interval. E.g. if you want to achieve a transfer every microframe (125 µs), you will need to submit 8 transfers per URB (USB Request Block), which means a scheduling service interval of 1 ms.
Anything else requires your own kernel-mode driver or is out-of-spec.
On RT Linux I can confirm round trips of 2 × 125 µs plus some overhead.
Excerpts from embedded.com: "USB 3.0 vs USB 2.0: A quick reference summary for the busy engineer"
Communication architecture differences
USB 2.0 employs a communication architecture where the data transaction must be initiated by the host. The host will frequently poll the device and ask for data, and the device may only transmit data once it has been requested by the host. The high polling frequency not only increases power consumption, it increases transmission latency because the data can only be transmitted when the device is polled by the host. USB 3.0 improves upon this communication model and reduces transmission latency by minimizing polling and also allowing devices to transmit data as soon as it is ready.
...
Timestamp enhancements
Unlike USB 2.0 cameras, which can range in accuracy from 0 to 125 µs, the timestamp originating from USB 3.0 cameras is more precise, and mimics the accuracy of the 1394 cycle timer of FireWire cameras.
...
USB 3.0 -- or SuperSpeed USB -- overcomes key limitations of other specifications with six (over IEEE 1394b) to nine (over USB 2.0) times higher bandwidth, better error management, higher power supply, ... and lower latency and jitter times.
P.S. It also mentions "longer cable lengths" for USB 3.0, but another paragraph contradicts this and says up to 5 m for USB 2.0 and up to 3 m for USB 3.0.