I'm using the MPSSE library for SPI communication from a PC.
There are 3 main functions which handle communication via SPI:
SPI_Read - for reading
SPI_Write - for writing
and the third one
SPI_ReadWrite, described as:
this function reads from and writes to the SPI slave simultaneously, meaning that one bit is clocked in and one bit is clocked out during every clock cycle.
and I can't see the difference between them at the level of the SPI signals...
What's the purpose of SPI_ReadWrite, if in SPI we have to do the writing and reading serially anyway?
Generally, an SPI transfer is always bidirectional, if we look only at the wires.
With SPI_Read() you would need to look into its source, or watch the sending wire, to learn what is sent; probably it is all 0s or all 1s. Only the received data is returned, and the caller does not need to provide data to send.
With SPI_Write() the received data is ignored. The caller has to provide the data to send.
And SPI_ReadWrite() reveals the true nature of this interface. With each clock one bit is sent, and one bit is received. Data to send must be provided, and received data is returned. The number of bits is the same for both directions.
How the exchanged data is interpreted depends entirely on the communication partners.
All operations on SPI are simultaneous read/write. The read-only function must write "dummy" data, while the write-only function reads data and discards it.

Note that the MPSSE code is not SPI code but USB code for communicating with FTDI SPI/USB bridge devices, so the simplicity of SPI is not really apparent here: USB is massively more complex and, unlike SPI, is half-duplex (excepting USB-C SuperSpeed modes), so it cannot truly read and write simultaneously. That part is performed by the FTDI device, not by the USB-connected PC, which merely has a "virtual" SPI port.

Most likely, if you look into either SPI_Read() or SPI_Write(), you will find that both are implemented using SPI_ReadWrite() - at least that is what you would see in an MCU implementation.
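For illustration, here is a minimal sketch of what such an MCU-side implementation might look like; the function signatures are assumptions for the sketch, not the actual MPSSE API:

#include <stdint.h>
#include <stddef.h>

/* Assumed primitive: shifts out tx[i] while shifting in rx[i], one bit
 * per clock, for 'len' bytes. */
void SPI_ReadWrite(const uint8_t *tx, uint8_t *rx, size_t len);

void SPI_Write(const uint8_t *tx, size_t len)
{
    uint8_t discard;
    for (size_t i = 0; i < len; i++)
        SPI_ReadWrite(&tx[i], &discard, 1);   /* received byte is ignored */
}

void SPI_Read(uint8_t *rx, size_t len)
{
    const uint8_t dummy = 0xFF;               /* "dummy" data on MOSI */
    for (size_t i = 0; i < len; i++)
        SPI_ReadWrite(&dummy, &rx[i], 1);     /* caller provides no tx data */
}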
The reason is that the master and slave SPI roles are implemented as single shift-registers connected in a loop; as the master clocks data out one end of its shift-register, new data is simultaneously shifted in from the slave, so that after one complete word-write operation, new data from the slave will be present in the master shift register.
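As a purely illustrative toy model (standalone C, not driver code), this is the circular exchange in action - after eight clocks the master and slave have simply swapped bytes:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t master = 0xA5, slave = 0x3C;
    for (int clk = 0; clk < 8; clk++) {
        uint8_t mosi = (master >> 7) & 1;          /* master shifts out its MSB */
        uint8_t miso = (slave  >> 7) & 1;          /* slave shifts out its MSB  */
        master = (uint8_t)((master << 1) | miso);  /* MISO bit shifts in */
        slave  = (uint8_t)((slave  << 1) | mosi);  /* MOSI bit shifts in */
    }
    printf("master=0x%02X slave=0x%02X\n", master, slave);  /* swapped */
    return 0;
}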
The nature of SPI is indicated by the signal names - master-out/slave-in (MOSI) and master-in/slave-out (MISO) - both occur, one bit per SCLK edge.
[Figure: SPI 8-bit circular transfer, by CBurnett - https://en.wikipedia.org/wiki/Serial_Peripheral_Interface#/media/File:SPI_8-bit_circular_transfer.svg]
In your case however what you have is:
                    ----------------------------------
                   | SPI/USB Bridge e.g. FT4222H      |
 ----------        |     ----------      ----------   |
|          |<--SCLK-----|          |    |          |  |
| SPI Slave|---MISO---->|SPI Master|<-->|USB Device|  |
|          |<--MOSI-----|          |    |          |  |
 ----------        |     ----------      ----------   |
                    -------------------------|--------
                                             |
                                          --------
                                         |USB Host|
                                         |  (PC)  |
                                          --------
So the communication you are seeing is USB, not SPI. The simultaneous read/write occurs at the SPI master/slave level and is bridged to USB as half-duplex (a write followed by a read). Note that host/device is simply a less loaded term than master/slave - the relationship and control are similar, in that a USB device cannot initiate a transfer autonomously.
Not all devices can usefully exploit the simultaneous read/write capability; it is mostly useful where the input and output data are independent and streamed, such as in SPI UARTs, or perhaps in a proprietary MCU-to-MCU connection. In transactional operations, such as reading a device register or an EEPROM memory location, you typically write a command or request and separately read the response, so separate read and write "facades" over the intrinsic read/write are provided so that you don't have to handle the dummy data yourself.
Related
I have a third-party device that is UART programmable.
I need to create a USB-UART bridge with a password function (programming is allowed only after entering the correct password).
I generated the code using the latest version of STM32CubeMX for Atollic TrueSTUDIO for STM32 9.3.0 ...
I transfer data between USB and UART through a buffer (one for USB-to-UART, and another one for UART-to-USB).
When I transfer a few characters everything is OK, but when I transfer a large data packet, problems start because the USB speed is much higher than the UART speed ...
There are two questions:
1. How do I tell USB that I need to stop transferring data and wait while the UART (buffer) is busy?
2. How can the microcontroller obtain the baud rate set on the PC (set when the terminal connects to the virtual COM port)?
USB provides flow control. That's what you need to implement. A general introduction can be found here:
https://medium.com/@manuel.bl/usb-for-microcontrollers-part-4-handling-large-amounts-of-data-f577565c4c7d
Basically, the setup for the USB-to-UART direction should be:
Indicate that the code is ready to receive a USB packet
Receive a USB packet
Indicate that you are no longer ready to receive a USB packet
Transmit the data via UART
Start over
Step 0: Initial setup
Call USBD_CDC_SetRxBuffer to set the buffer for receiving the USB data. Unless you use several buffers to achieve higher throughput, a single call at the start of the program is sufficient.
Step 1: Ready to receive data
Call USBD_CDC_ReceivePacket. Contrary to what the name implies, this function only indicates that the app is ready to receive data. It returns immediately, before the data has actually been received.
Step 2: Receive a USB packet
You don't need to do anything here. It will happen automatically. Once it's complete, CDC_Itf_Receive will be called.
Step 3: Indicate that you are no longer ready to receive a USB packet
Nothing to do here. This happens automatically whenever a packet has been received (and double buffering is not enabled).
Step 4: Transmit the data via UART
I guess you know how to do this. It's up to you whether you want to do it in a blocking fashion or using DMA.
Since a callback is involved, you cannot put this code into the main loop. It might be possible to put all the code into CDC_Itf_Receive if blocking UART transmission is used; the steps would then execute in the order 2, 3, 4, 1. Additionally, the initialization (steps 0 and 1) is needed, as in the sketch below.
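Here is a minimal sketch of that arrangement, using the function names from the ST CDC example code; the handle names hUsbDeviceFS and huart1 are assumptions from a typical CubeMX project:

#include "usbd_cdc_if.h"            /* CubeMX-generated CDC interface */

extern USBD_HandleTypeDef hUsbDeviceFS;   /* assumed handle names */
extern UART_HandleTypeDef huart1;

static uint8_t rx_buffer[64];       /* one full-speed bulk packet */

/* Steps 0 and 1: set the receive buffer once, then signal readiness
 * for the first packet. */
void CDC_FlowControl_Init(void)
{
    USBD_CDC_SetRxBuffer(&hUsbDeviceFS, rx_buffer);
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);
}

/* Steps 2 and 3 happen inside the USB stack; this callback runs once
 * a packet has been stored, and further OUT packets are NAKed. */
static int8_t CDC_Itf_Receive(uint8_t *Buf, uint32_t *Len)
{
    /* Step 4: forward the packet over UART (blocking, for simplicity) */
    HAL_UART_Transmit(&huart1, Buf, (uint16_t)*Len, HAL_MAX_DELAY);

    /* Back to step 1: ready for the next packet only now, so USB is
     * throttled to the UART speed. */
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);
    return USBD_OK;
}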
In the UART-to-USB direction, you would need to implement flow control on the UART side. The USB flow control is managed by the host. Even though USB is much faster than UART, flow control is still relevant because the application on the host can process data as slowly as it likes.
Regarding question 2: I'm not sure I understand it... The microcontroller cannot set the baud rate on the host. Either the host specifies a baud rate (transmitted over USB and applied to the UART), or, if the UART has a fixed baud rate, you can ignore the baud rate entirely (any rate set on the host side will work, as it does not apply to USB).
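For the first alternative: with the CDC ACM class, the host sends its baud rate in a SET_LINE_CODING request. A sketch of handling it, again following the naming of the ST example code (the callback and UART handle names may differ in your generated project):

#include "usbd_cdc_if.h"

extern UART_HandleTypeDef huart1;   /* assumed UART handle */

/* Called by the CDC class for class-specific control requests. */
static int8_t CDC_Itf_Control(uint8_t cmd, uint8_t *pbuf, uint16_t length)
{
    switch (cmd)
    {
    case CDC_SET_LINE_CODING:
    {
        /* Line coding layout (CDC spec): dwDTERate (4 bytes, little-
         * endian), bCharFormat, bParityType, bDataBits. */
        uint32_t baud = (uint32_t)pbuf[0]
                      | ((uint32_t)pbuf[1] << 8)
                      | ((uint32_t)pbuf[2] << 16)
                      | ((uint32_t)pbuf[3] << 24);
        huart1.Init.BaudRate = baud;    /* apply host-selected rate */
        HAL_UART_Init(&huart1);         /* reinitialize the UART    */
        break;
    }
    default:
        break;
    }
    (void)length;
    return USBD_OK;
}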
I'm developing a USB device driver for a microcontroller (Atmel/Microchip SAMD21, but I think the question is a general one). I need multiple endpoints for control & data, and the USB hardware uses per-endpoint descriptors to (among other things) locate buffers for input and output data.
Since IN data is polled at the host's discretion it makes sense that each endpoint has its own IN buffer, so that any endpoint's data (if it has any to send) is immediately available when polled.
But as far as incoming data from SETUP & OUT transactions is concerned, it occurs to me that I can save memory by configuring all endpoints to use a shared buffer. It seems wasteful for each endpoint to have its own buffer when, given the nature of USB transactions, only one such transaction can occur at a time.
Obviously this approach requires that transaction interrupts are handled sufficiently quickly that the shared buffer is freed and prepared for a new transaction in time for whatever the next transaction might be - but this is already a requirement for the control endpoint, where some SETUP transactions are immediately followed by an OUT.
So, assuming the timing is feasible, is there any other reason why such an approach wouldn't work?
Probably not.
Normally, the USB module on a microcontroller handles OUT packets by keeping track of which packet buffers it has written data to, and it waits for your firmware to say it is done processing the buffer before accepting more data from the computer and overwriting the buffer. If an endpoint has no buffers available to receive more data, but the computer sends an OUT packet to the endpoint, the USB module typically responds to the computer with a NAK packet, which tells the computer it should retry later. In this situation, your firmware can take pretty much as long as it wants to handle the OUT packets.
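To make that handshake concrete, here is a generic sketch with invented names - these are not actual SAMD21 registers, so check the datasheet for the real mechanism:

#include <stdbool.h>
#include <stdint.h>

/* Generic model of one OUT endpoint; field names are invented. */
typedef struct {
    uint8_t       *buffer;   /* where the USB module stores OUT data     */
    volatile bool  armed;    /* true: module may accept the next packet  */
} ep_out_t;

/* Firmware is done with the buffer: hand it back to the USB module. */
static void ep_out_arm(ep_out_t *ep, uint8_t *buf)
{
    ep->buffer = buf;
    ep->armed  = true;
}

/* Called from the USB interrupt when an OUT packet has been stored.
 * From now until ep_out_arm() is called again, the module answers
 * further OUT packets on this endpoint with NAK, so the firmware can
 * take as long as it needs to process the data. */
static void ep_out_packet_received(ep_out_t *ep, uint16_t len)
{
    ep->armed = false;       /* firmware owns the buffer now */
    /* ... queue or process ep->buffer[0..len-1], then re-arm ... */
    (void)len;
}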
By having multiple endpoints configured to use the same buffer, you mess up this system. When you receive an OUT packet on any of your endpoints, the USB module would (probably) not know that multiple endpoints use the same buffer, so it would not issue NAK packets on your other OUT endpoints. If it receives another OUT packet right away, it would write it to the same buffer, overwriting the previous packet. Therefore, whenever you receive a packet, your code would have to rush as fast as it can to do something like copying the data out of that buffer, disabling other OUT endpoints, or reassigning buffers.
Even if you can actually get this to work, it means that your scheme to save a little bit of memory turns the servicing of USB events into a real-time task (i.e. a task that requires responses from your code in a few microseconds). If you want to add another real-time task to your system later, it will be very difficult, because you always have to be ready to be interrupted by your USB-handling code.
The SAMD21 has tons of memory (32 KB) so you probably don't need to worry about optimizing this part of it.
I agree with David's response. You didn't mention the speed of the device you are creating. A low-speed device needs just a few 8-byte buffers; a full-speed device, a few 64-byte buffers; a high-speed device, maybe eight 512-byte buffers, depending on your use. Even for a super-speed device, you're still only talking about a few 1024-byte buffers.
I would create a ring buffer for each endpoint. This way you are not moving data around; you simply advance a pointer to an entry within a memory ring. A full-speed device with a control endpoint, an interrupt endpoint, and two bulk endpoints, each endpoint having sixteen 64-byte entries per ring, still needs only a total of 4 KB of RAM - 1/8th of the total RAM. A sketch of such a ring follows.
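A minimal sketch of such a ring, sized to match the full-speed example above (sixteen 64-byte entries per endpoint):

#include <stdbool.h>
#include <stdint.h>

#define RING_ENTRIES 16
#define ENTRY_SIZE   64                 /* full-speed bulk packet size */

typedef struct {
    uint8_t           data[RING_ENTRIES][ENTRY_SIZE];
    uint16_t          len[RING_ENTRIES];  /* bytes actually received */
    volatile uint8_t  head;             /* next entry the USB ISR fills */
    volatile uint8_t  tail;             /* next entry the app drains    */
} ep_ring_t;

static bool ring_full(const ep_ring_t *r)
{
    return (uint8_t)((r->head + 1) % RING_ENTRIES) == r->tail;
}

static bool ring_empty(const ep_ring_t *r)
{
    return r->head == r->tail;
}

/* The USB ISR points the endpoint at r->data[r->head] and, after a
 * packet arrives, records the length and advances head. While the ring
 * is full, the endpoint is left un-armed so the hardware NAKs further
 * OUT packets; the application consumes entries at tail. */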
However, I am not familiar with the SAMD21, so please check the specification to be sure this will work.
I have a Gateway - Node application using a LoRa module, but I don't know whether to choose a LoRa module with a UART interface or an SPI interface.
Can someone help me distinguish the difference between these two types? For example: when I have 5 nodes connected to the gateway, which one should I use? And what about when I have 50 nodes?
Thanks!
A UART converts the signals to RS-232 signaling (though not to RS-232 voltages - you will need an additional adapter chip, such as the FTDI 232H, to hook up to a serial port on a computer). Speeds are usually limited to less than 400 kilobits per second (this varies with distance and devices).
If you are connecting multiple devices to the same microcontroller (e.g. an Arduino), use SPI. The connection speeds are not limited by standards. It is a bus arrangement with 4 pins: clock (SCLK), input (MISO), output (MOSI), and slave select (SS). SCLK, MISO, and MOSI are connected to all devices; each additional device on the chain requires one more SS pin, as sketched below.
SPI is going to be faster - several (<5?) megabits per second is not uncommon (depending on wire length (not greater than 0.3 meters), wire quality, environmental noise, and device specifications) - and it requires fewer discrete components.
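A sketch of that bus arrangement; gpio_write() and spi_transfer() are hypothetical HAL calls, not from any particular library:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical HAL calls - stubbed here; replace with your platform's. */
static void gpio_write(int pin, int level) { (void)pin; (void)level; }
static void spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len)
{ (void)tx; (void)rx; (void)len; }

/* One chip-select pin per transceiver; all other SPI lines are shared. */
#define SS_LORA_0 10
#define SS_LORA_1 11

/* Select one device, exchange data, deselect. Only the device whose SS
 * is low drives MISO, so the shared lines never conflict. */
void lora_exchange(int ss_pin, const uint8_t *tx, uint8_t *rx, size_t len)
{
    gpio_write(ss_pin, 0);      /* SS is active-low: select */
    spi_transfer(tx, rx, len);
    gpio_write(ss_pin, 1);      /* deselect                 */
}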
Since LoRa speeds max out around 300 kbps, a single SPI-connected gateway could theoretically handle 15 LoRa transceivers (15 x 300 kbps = 4.5 Mbps, just within the SPI bandwidth above).
Running 15 devices may violate local RF duty-cycle restrictions, resulting in fines and/or imprisonment.
Please check with your regulatory institutions prior to implementation of any solution.
I would suggest using four transceivers with external antennas, each pointing in a different cardinal direction (possibly offset), at each gateway. This configuration should permit 400+ client devices per gateway (depending on usage patterns).
Good evening!
I'm configuring NTP on an embedded Linux system connected to a u-blox GPS receiver. I'm using NTPD and GPSD.
I would like to know the technical difference between:
the PPS signal provided via the GPSD shared memory (SHM) driver (pseudo IP address 127.127.28.1);
the "standalone" PPS driver (pseudo IP address 127.127.22.0), which is still connected to the GPS in some way that I would like to understand.
It is critical for me to understand this, because I really need high-quality synchronization and I want the right information from the receiver.
Searching all over the web, I've found only confusing answers to my question...
Thanks in advance!
FL
The SHM driver is not designed to provide a PPS signal by itself. So maybe your notion here is misguided.
A PPS signal is used for getting a (precise) notion of the frequency of the local clock (the one used for measuring external signals), as it provides a well-known timing distance between the "pulses" (1 s in this case). In effect, PPS is a frequency source.
GPSD, on the other hand, communicates with some device (which could be built into your hardware). It then provides the time data read from the GPS source to ntp via shared memory. This provisioning of data does not guarantee any timing relation (delay); e.g. it could occur earlier or later within the second due to load or scheduling.
From the perspective of ntp, you will have a true date/time label, but you might not know exactly when the related point in time occurred relative to your local clock (usually not precisely enough for common ntp use cases). This is where PPS kicks in.
Depending on how the GPS device is connected to your local machine (parallel, serial, internal bus), you will have some way of getting an interrupt on the pulse of the PPS signal (e.g. with a serial connection you usually get a transition on the DCD pin).
The internal processing of the related interrupt reads the local clock, and the resulting timing information is then provided for further processing. This information is exactly what the PPS clock discipline is using and providing to ntp. What you need to configure here is the offset from the triggering of the pulse to the reading of the local clock. (The pulse is usually assumed to occur "on the second".)
So, in your configuration, it is likely that the "source" of the PPS signal is the same one GPSD is using for providing date/time data (your GPS device).
However, the actual signals used for date/time data and PPS are different: date/time will use a data telegram or some register content read from the GPS device, while PPS will be a level change on an input pin provided by this very device.
For details, start with the interfacing information from your GPS receiver, especially any timings stated there. Then look at ntp and figure out which driver(s) will allow exploiting such input data for the best time quality. A configuration sketch follows.
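Purely as an illustration, an ntpd configuration combining the two drivers might look like this; the unit numbers and fudge values are placeholders that depend on your setup and must be calibrated:

# GPSD shared-memory driver (type 28): date/time from the GPS telegrams.
# Marked "prefer" so the PPS driver can discipline against it.
server 127.127.28.0 minpoll 4 maxpoll 4 prefer
fudge  127.127.28.0 time1 0.0 refid GPS

# PPS driver (type 22): the precise on-the-second pulse, e.g. /dev/pps0.
server 127.127.22.0 minpoll 4 maxpoll 4
fudge  127.127.22.0 refid PPS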
USB 2.0 specifies 4 types of transfers (in section 5.4 Transfer Types):
Control Transfers
Isochronous Transfers
Interrupt Transfers
Bulk Transfers
Section 5.8 says that Bulk Transfers provide:
Access to the USB on a bandwidth-available basis
Retry of transfers, in the case of occasional delivery failure due to errors on the bus
Guaranteed delivery of data but no guarantee of bandwidth or latency
(Emphasis mine.)
I don't see a similar statement for Control Transfers. Do they also guarantee delivery? If not, how are users expected to handle failures?
Please provide a citation(s) to support your answer.
The USB specification provides robust error detection and recovery for control transfers. The control transfer will either be completed or the USB host will know that it failed, and I think that's what "guaranteed delivery" is supposed to mean. This is important because control transfers are used to set up the device when you plug it into a computer and they are also used for many important purposes by the various USB device classes (e.g. they are used to set the baud rate of a serial port on a USB CDC ACM device).
From section 5.5.5 of the USB 2.0 specification:
The USB provides robust error detection and recovery/retransmission for errors that occur during control transfers. Transmitters and receivers can remain synchronized with regard to where they are in a control transfer and recover with minimum effort. Retransmission of Data and Status packets can be detected by a receiver via data retry indicators in the packet. A transmitter can reliably determine that its corresponding receiver has successfully accepted a transmitted packet by information returned in a handshake to the packet. The protocol allows for distinguishing a retransmitted packet from its original packet except for a control Setup packet. Setup packets may be retransmitted due to a transmission error; however, Setup packets cannot indicate that a packet is an original or a retried transmission.
The only transfer type without guaranteed delivery is isochronous. Also, the start of frame (SOF) packets don't have guaranteed delivery.