If yes, how does the hardware know if there's really data or not, since the level on the MOSI/MISO line never changes?
I assume that with SPI you mean Serial Peripheral Interface.
According to Wikipedia, the data is sent like this:
During each SPI clock cycle, a full duplex data transmission occurs. The master sends a bit on the MOSI line and the slave reads it, while the slave sends a bit on the MISO line and the master reads it. This sequence is maintained even when only one-directional data transfer is intended.
(https://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus#Data_transmission).
Because SPI works with an explicit clock line, the receiver samples the data line on clock edges rather than inferring timing from level transitions, so sending (only) zeros works.
Besides that, with a line code such as Manchester coding, the MOSI/MISO line would change even for all-zero data. I'm not sure whether that can be combined with the Serial Peripheral Interface, though.
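To see why the clock makes constant-low data unambiguous, here is a minimal bit-banged SPI master transfer (mode 0, MSB first); the gpio_* helpers and pin numbers are placeholders, not a real HAL:

    /* Hypothetical GPIO helpers. */
    extern void gpio_write(int pin, int level);
    extern int  gpio_read(int pin);
    extern void delay_half_clock(void);

    #define PIN_SCK  0
    #define PIN_MOSI 1
    #define PIN_MISO 2

    /* Even if tx_byte is 0x00, eight clock pulses are generated, so
     * the slave still samples (and we still shift in) eight valid
     * bits; no edge on MOSI is needed to mark the data. */
    unsigned char spi_transfer(unsigned char tx_byte)
    {
        unsigned char rx_byte = 0;

        for (int bit = 7; bit >= 0; bit--) {
            gpio_write(PIN_MOSI, (tx_byte >> bit) & 1); /* may stay low */
            delay_half_clock();
            gpio_write(PIN_SCK, 1);  /* rising edge: both sides sample */
            rx_byte = (unsigned char)((rx_byte << 1) | gpio_read(PIN_MISO));
            delay_half_clock();
            gpio_write(PIN_SCK, 0);
        }
        return rx_byte;
    }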
I have a third-party device that is UART-programmable.
I need to create a USB-to-UART bridge with a password function (programming is allowed only after entering the correct password).
I generated the code using the latest version of STM32CubeMX for Atollic TrueSTUDIO for STM32 9.3.0 ...
I transfer data between USB and UART through buffers (one for USB-to-UART, and another one for UART-to-USB).
When I transfer several characters everything is OK, but when I try to transfer a large data packet, problems start because the USB speed is much higher than the UART speed ...
I have two questions:
1. How do I tell USB that it needs to stop transferring data and wait while the UART buffer is busy?
2. How does the microcontroller obtain the baud rate set on the PC (set when the terminal is connected to the virtual COM port)?
USB provides flow control. That's what you need to implement. A general introduction can be found here:
https://medium.com/@manuel.bl/usb-for-microcontrollers-part-4-handling-large-amounts-of-data-f577565c4c7d
Basically, the setup for the USB-to-UART direction should be:

1. Indicate that the code is ready to receive a USB packet
2. Receive a USB packet
3. Indicate that you are no longer ready to receive a USB packet
4. Transmit the data via UART
5. Start over at step 1
Step 0: Initial setup
Call USBD_CDC_SetRxBuffer to set the buffer for receiving the USB data. Unless you use several buffers to achieve higher throughput, a single call at the start of the program is sufficient.
Step 1: Ready to receive data
Call USBD_CDC_ReceivePacket. Contrary to what the name implies, this function indicates that the app is ready to receive data; it returns immediately, before any data has actually been received.
Step 2: Receive a USB packet
You don't need to do anything here; it happens automatically. Once a packet has been received, CDC_Itf_Receive will be called.
Step 3: Indicate that you are no longer ready to receive a USB packet
Nothing to do here. This happens automatically whenever a packet has been received (and double buffering is not enabled).
Step 4: Transmit the data via UART
I guess you know how to do this. It's up to you whether you want to do it in a blocking fashion or using DMA.
Since a callback is involved, you cannot put this code into the main loop. If blocking UART transmission is used, it might be possible to put all of the code into CDC_Itf_Receive, where it would execute in the order 2, 3, 4, 1. Additionally, the initialization (steps 0 and 1) is still needed.
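Put together, a minimal sketch of this flow might look as follows. It assumes the usual CubeMX-generated names (hUsbDeviceFS, huart2) and a blocking UART transmit; adapt these to your project.

    /* Sketch based on the STM32Cube USB Device CDC class. */
    #include "usbd_cdc_if.h"

    extern USBD_HandleTypeDef hUsbDeviceFS;
    extern UART_HandleTypeDef huart2;

    static uint8_t rx_buffer[64];  /* one full-speed bulk packet */

    /* Steps 0 and 1: run once at startup, after USB initialization. */
    void cdc_rx_setup(void)
    {
        USBD_CDC_SetRxBuffer(&hUsbDeviceFS, rx_buffer); /* step 0 */
        USBD_CDC_ReceivePacket(&hUsbDeviceFS);          /* step 1: "ready" */
    }

    /* Steps 2-4: called by the USB stack when a packet has arrived.
     * Until reception is re-armed, the endpoint NAKs further OUT
     * packets (step 3), so the host simply waits. */
    static int8_t CDC_Itf_Receive(uint8_t *Buf, uint32_t *Len)
    {
        /* Step 4: blocking UART transmit; with DMA, defer the
         * ReceivePacket call to the DMA-complete callback instead. */
        HAL_UART_Transmit(&huart2, Buf, (uint16_t)*Len, HAL_MAX_DELAY);

        USBD_CDC_ReceivePacket(&hUsbDeviceFS);  /* back to step 1 */
        return USBD_OK;
    }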
In the UART-to-USB direction, you would need to implement flow control on the UART side. The USB flow control is managed by the host. Even though USB is much faster than UART, flow control is still relevant, because the application on the host can consume data as slowly as it likes.
Regarding question 2: I'm not sure I understand it... The microcontroller cannot set the baud rate on the host. Either the host specifies a baud rate (transmitted over USB as a CDC line-coding request and then applied to the UART), or, if the UART has a fixed baud rate, you can ignore the baud rate entirely (any baud rate set on the host side will work, since it does not apply to USB).
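If you do want the host-selected baud rate, it arrives in the CDC class request SET_LINE_CODING. A sketch of the corresponding handler in the STM32Cube CDC interface template (the default values and the idea of re-initializing huart2 are assumptions):

    #include "usbd_cdc_if.h"

    static uint32_t host_baud_rate = 115200; /* until the host sets one */

    static int8_t CDC_Itf_Control(uint8_t cmd, uint8_t *pbuf, uint16_t length)
    {
        (void)length;
        switch (cmd) {
        case CDC_SET_LINE_CODING:
            /* 7-byte payload per the USB CDC spec: bytes 0-3 are the
             * little-endian baud rate (dwDTERate), byte 4 stop bits,
             * byte 5 parity, byte 6 data bits. */
            host_baud_rate = (uint32_t)pbuf[0]
                           | ((uint32_t)pbuf[1] << 8)
                           | ((uint32_t)pbuf[2] << 16)
                           | ((uint32_t)pbuf[3] << 24);
            /* Optionally reconfigure the UART here, e.g. set
             * huart2.Init.BaudRate = host_baud_rate and re-init. */
            break;

        case CDC_GET_LINE_CODING:
            pbuf[0] = (uint8_t)host_baud_rate;
            pbuf[1] = (uint8_t)(host_baud_rate >> 8);
            pbuf[2] = (uint8_t)(host_baud_rate >> 16);
            pbuf[3] = (uint8_t)(host_baud_rate >> 24);
            pbuf[4] = 0;  /* 1 stop bit */
            pbuf[5] = 0;  /* no parity */
            pbuf[6] = 8;  /* 8 data bits */
            break;

        default:
            break;
        }
        return USBD_OK;
    }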
So, what I have learned so far is that the CPU programs the source address, destination address, word count, and transfer direction into the DMA controller whenever it needs to transfer data from, say, a hard drive. In that example the hard drive is just a dumb device, so this makes sense, because the hard drive can never initiate a data transfer.
But what if we have connected a serial port on which, at unpredictable moments, we are going to receive 8 bits of data? I know the DMA controller is used for large memory transfers, but say I want to use DMA for these 8 bits. The device driver on the CPU cannot tell when the data is coming, and it also cannot tell how much data is coming, because the serial port may send 8 bits, 16 bits, or no data at all. So in this case, who fills in the DMA controller's count and memory addresses, given that the device driver has no idea when the data will arrive?
Using DMA for serial input is complicated when the incoming data is not a continuous stream or fixed-length packets. The exact details will depend on the specific UART and DMA controller, but generally, each character that arrives will be copied to the next location in the provided DMA buffer, and an interrupt will be generated by the DMA controller when the buffer is half-filled and again when it is completely filled.
A single-byte DMA buffer serves little purpose over using the UART's data-available interrupt, and will simply delay byte processing by one character period.
If your DMA buffer were two characters long, you'd get an interrupt for every character (one for the half transfer and one for the full transfer), which solves the problem of partially filled buffers not being serviced, but does not reduce the interrupt overhead at all, so it offers little advantage over direct UART interrupt handling. If your UART includes a FIFO buffer, that would be a better way of dealing with asynchronous serial input when only a small amount of buffering is required.
When a larger DMA buffer is used, the interrupt rate is reduced, but while a buffer is incomplete you will not get an interrupt, and the data may wait indefinitely. One solution to that problem is a timeout mechanism: if the DMA interrupt does not arrive within a period determined by the baud rate and buffer length, a timeout handler retrieves all data currently buffered. Such a mechanism requires care to avoid race conditions between the timeout and the DMA interrupt, to ensure that data arriving while the timeout is being processed is not lost, and to ensure that data already retrieved by the timeout is not processed again when the DMA interrupt eventually arrives.
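The following generic sketch shows one way to structure that. Here dma_bytes_written() is a hypothetical hook standing in for however your DMA controller exposes its transfer count, and both callers must run at the same interrupt priority (or mask each other) to avoid the races mentioned above.

    #include <stdint.h>

    #define RING_SIZE 256
    static volatile uint8_t ring[RING_SIZE]; /* DMA writes here circularly */
    static uint32_t read_index;              /* bytes consumed so far */

    extern uint32_t dma_bytes_written(void); /* hypothetical: total bytes stored */
    extern void handle_byte(uint8_t b);

    /* Called from the DMA half/full-transfer interrupts AND from a
     * timer whose period is a few character times. Whichever fires
     * first drains the data; the other then finds nothing new. */
    void drain_rx_ring(void)
    {
        uint32_t written = dma_bytes_written();
        while (read_index != written) {
            handle_byte(ring[read_index % RING_SIZE]);
            read_index++;
        }
    }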
I'm developing a USB device driver for a microcontroller (Atmel/Microchip SAMD21, but I think the question is a general one). I need multiple endpoints for control & data, and the USB hardware uses per-endpoint descriptors to (among other things) locate buffers for input and output data.
Since IN data is polled at the host's discretion it makes sense that each endpoint has its own IN buffer, so that any endpoint's data (if it has any to send) is immediately available when polled.
But as far as incoming data from SETUP & OUT transactions is concerned, it occurs to me that I can save memory by configuring all endpoints to use a shared buffer. It seems wasteful for each endpoint to have its own buffer when, given the nature of USB transactions, only one such transaction can occur at a time.
Obviously this approach requires that transaction interrupts are handled sufficiently quickly that the shared buffer is freed and prepared for a new transaction in time for whatever the next transaction might be - but this is already a requirement for the control endpoint, where some SETUP transactions are immediately followed by an OUT.
So, assuming the timing is feasible, is there any other reason why such an approach wouldn't work?
Probably not.
Normally, the USB module on a microcontroller handles OUT packets by keeping track of which packet buffers it has written data to, and it waits for your firmware to say it is done processing the buffer before accepting more data from the computer and overwriting the buffer. If an endpoint has no buffers available to receive more data, but the computer sends an OUT packet to the endpoint, the USB module typically responds to the computer with a NAK packet, which tells the computer it should retry later. In this situation, your firmware can take pretty much as long as it wants to handle the OUT packets.
By having multiple endpoints configured to use the same buffer, you mess up this system. When you receive an OUT packet on any of your endpoints, the USB module would (probably) not know that multiple endpoints use the same buffer, so it would not issue NAK packets on your other OUT endpoints. If it receives another OUT packet right away, it would write it to the same buffer, overwriting the previous packet. Therefore, whenever you receive a packet, your code would have to rush as fast as it can to do something like copying the data out of that buffer, disabling other OUT endpoints, or reassigning buffers.
Even if you can actually get this to work, it means that your scheme to save a little bit of memory turns the servicing of USB events into a real-time task (i.e. a task that requires responses from your code in a few microseconds). If you want to add another real-time task to your system later, it will be very difficult, because you always have to be ready to be interrupted by your USB-handling code.
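As a rough illustration of that ownership handshake, the sketch below uses hypothetical usb_* helpers in place of the SAMD21's actual endpoint descriptor and bank-ready flags; the point is that each endpoint needs its own buffer for the NAK mechanism to protect it.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_OUT_EPS 3
    #define EP_BUF_SIZE 64

    static uint8_t ep_buf[NUM_OUT_EPS][EP_BUF_SIZE]; /* one buffer per endpoint */
    static volatile bool ep_has_data[NUM_OUT_EPS];

    extern void process(const uint8_t *buf);
    extern void usb_rearm_out_endpoint(uint8_t ep); /* hypothetical */

    /* Interrupt: the module ACKed an OUT packet into ep_buf[ep]. Until
     * the endpoint is re-armed, further OUT packets to it are NAKed,
     * so nothing is overwritten and there is no hard deadline. */
    void on_out_packet(uint8_t ep)
    {
        ep_has_data[ep] = true;  /* hand ownership to the main loop */
    }

    void main_loop_poll(void)
    {
        for (uint8_t ep = 0; ep < NUM_OUT_EPS; ep++) {
            if (ep_has_data[ep]) {
                process(ep_buf[ep]);        /* no rush: endpoint is NAKing */
                ep_has_data[ep] = false;
                usb_rearm_out_endpoint(ep); /* buffer is free again */
            }
        }
    }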
The SAMD21 has tons of memory (32 KB) so you probably don't need to worry about optimizing this part of it.
I agree with David's response. You didn't mention the speed of the device you are creating. A low-speed device would need just a few 8-byte buffers; a full-speed device, a few 64-byte buffers; a high-speed device, maybe eight 64-byte buffers, depending on your use. Even for a super-speed device, you're still only talking about a few 512-byte buffers.
I would create a ring buffer for each endpoint. That way you are not moving data around; you simply advance a pointer to an entry within a memory ring. A full-speed device with a control endpoint, an interrupt endpoint, and two bulk endpoints, each endpoint having sixteen 64-byte entries per ring, still totals only 4 KB of RAM, 1/8th of the total.
However, I am not familiar with the SAMD21, so please check the specification to be sure this will work.
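To make the ring idea concrete, here is a minimal sketch under the sizes mentioned above (sixteen 64-byte entries per endpoint); all names are made up for illustration:

    #include <stdint.h>

    #define ENTRIES_PER_RING 16
    #define ENTRY_SIZE       64

    typedef struct {
        uint8_t  data[ENTRIES_PER_RING][ENTRY_SIZE];
        uint16_t len[ENTRIES_PER_RING]; /* actual bytes in each entry */
        volatile uint8_t head;          /* next slot the USB module fills */
        volatile uint8_t tail;          /* next slot the application reads */
    } ep_ring_t;

    static ep_ring_t rings[4]; /* control + interrupt + 2 bulk: ~4 KB of data */

    /* Where the endpoint hardware should receive the next packet. */
    uint8_t *ring_write_slot(ep_ring_t *r)
    {
        return r->data[r->head];
    }

    /* Commit a received packet: advance an index, no memcpy needed. */
    void ring_commit(ep_ring_t *r, uint16_t n)
    {
        r->len[r->head] = n;
        r->head = (uint8_t)((r->head + 1) % ENTRIES_PER_RING);
    }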
I have a relay contact-closure event that needs to be timestamped accurately (to within 1 ms) using a GPS receiver and its PPS output... I am not sure how to feed the relay contact output into a microcontroller and then synchronize the microcontroller clock to the GPS... and how do I get UTC after all that?
Can you please help me? Thanks.
If your microcontroller has at least two hardware-pin interrupts, you could connect the relay to one interrupt-capable pin and the PPS to the other.
You will need to connect the NMEA output (or another proprietary protocol of your GPS) to the corresponding port on your microcontroller. Common buses for this are UART or SPI.
Then, every time you get a PPS interrupt, you set a global flag that the main loop uses to reset a counter. This counter tells you how far from the PPS edge the relay switched (if it happened within that second). If you know the base frequency of the counter, you can convert the count into fractions of a second. Note that if both edges of the relay state change have to be detected, you will need an interrupt source capable of triggering on both edges (or two interrupts).
Then, when the relay interrupt fires, you read the counter value in the interrupt handler and save it for storage, sending to a host, etc. (It is best to save the value in RAM, set a "value present" flag, and leave the sending/storing to the main loop, which then clears the flag.)
Finally, when you receive a complete NMEA message (which could be parsed in your main loop by a state machine, for instance), you can send this information to the host or to storage along with the counter value you saved for the relay state change. Note that the NMEA message is generated and decoded with a certain delay relative to the PPS, so you will need to compensate for that.
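A sketch of the interrupt side, with hypothetical timer hooks; for simplicity (and less jitter) this variant resets the counter directly in the PPS interrupt rather than from the main loop. A free-running timer at 1 MHz gives 1 us resolution, comfortably inside the 1 ms requirement.

    #include <stdint.h>

    extern uint32_t timer_read(void);  /* hypothetical free-running timer */
    extern void     timer_reset(void);

    #define TIMER_HZ 1000000UL         /* assumed 1 MHz tick */

    static volatile uint32_t relay_offset; /* ticks from last PPS to relay edge */
    static volatile uint8_t  relay_flag;   /* "value present" for the main loop */
    static volatile uint8_t  pps_flag;     /* pair the next NMEA fix with this */

    void pps_isr(void)   /* rising edge of the GPS PPS pulse */
    {
        timer_reset();   /* top of a UTC second: restart the counter */
        pps_flag = 1;
    }

    void relay_isr(void) /* relay contact edge */
    {
        relay_offset = timer_read(); /* ticks elapsed since the PPS */
        relay_flag = 1;  /* main loop: seconds = relay_offset / TIMER_HZ,
                            added to the UTC second from the NMEA message */
    }

If the microcontroller's timer has input-capture channels, latching the count in hardware on the pin edge removes the interrupt-latency error from the measurement entirely.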
I am looking into the I2C protocol for PIC16F88X.
What I would like to do, is to enable an I2C slave to either ACK or NACK depending on the data received on the I2C.
The PIC can ACK or NACK the I2C address sent on the line, but from what I've read it will always ACK the subsequent received bytes. Is that correct?
In the following communication:
Start - I2C_Addr+Write/ACK - Register_value/NACK
I'd like the slave to be able to Ack or Nack depending on the value in Register_value. If the slave does not understand Register_value, it should not Ack.
Could someone please either confirm that this is not possible, or tell me how to do it?
Assuming you're using the MSSP peripheral:
Short answer: what you're asking for is probably not possible with a PIC, at least not without bit-banging the I/O lines. The reason is that ACK/NACK is decided on the 9th clock edge, and the SSPIF interrupt does not fire until the end of the 9th clock. You could try repeatedly checking the BF bit, since it is set as soon as the data byte has been shifted into the I/O register (8th clock). If you can pull off the comparison and set the SSPOV bit before the 9th clock cycle, this should generate a NACK; it is quite sketchy if you have any other interrupts running.
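In code, the race looks roughly like this (XC8-style register names for the PIC16F88X; command_is_valid is your own check, and whether the write to SSPOV lands before the 9th clock is exactly the sketchy part):

    #include <xc.h>

    extern unsigned char command_is_valid(unsigned char value);

    void i2c_slave_poll_byte(void)
    {
        while (!SSPSTATbits.BF)       /* BF sets once the byte is in (8th clock) */
            ;
        unsigned char value = SSPBUF; /* reading SSPBUF also clears BF */

        if (!command_is_valid(value)) {
            SSPCONbits.SSPOV = 1;     /* force overflow; if this happens before
                                         the 9th clock edge, the MSSP NACKs */
        }
    }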
Longer answer: it sounds like you're attempting to use ACK to validate whether the data byte the slave receives is valid. Personally, I wouldn't do this: ACK is there to signal the integrity of the line, not to verify data validity. If the device is a slave, the master by definition must know exactly how it works, and can check the validity of the byte before pushing it out on the I2C lines. In such cases I assume you also control the I2C master's code; use one common header file that defines all the commands or valid data bytes that can be sent, to avoid mismatches between the two sides.
If you must guarantee that the proper byte was received, have the master ask the slave for a response byte, with the slave returning a code that indicates the result of the previous transfer, as in the sketch below.
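A sketch of that status-byte handshake from the master's side; i2c_write/i2c_read are hypothetical stand-ins for whatever your master API provides, and the address and status codes are made up:

    #include <stdint.h>

    #define SLAVE_ADDR     0x42 /* example 7-bit address */
    #define STATUS_OK      0x00
    #define STATUS_BAD_CMD 0x01

    extern int i2c_write(uint8_t addr, const uint8_t *buf, uint16_t len);
    extern int i2c_read(uint8_t addr, uint8_t *buf, uint16_t len);

    /* Write a command, then read back one status byte the slave
     * prepared while processing it. ACK keeps its standard meaning
     * (line integrity); validity is reported in-band instead. */
    int send_command_checked(uint8_t command)
    {
        uint8_t status;

        if (i2c_write(SLAVE_ADDR, &command, 1) != 0)
            return -1;  /* bus-level failure (no ACK at all) */

        if (i2c_read(SLAVE_ADDR, &status, 1) != 0)
            return -1;

        return (status == STATUS_OK) ? 0 : -2; /* -2: slave rejected command */
    }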
If your intent is to guarantee the integrity of the I2C line, none of these approaches really works. Your only option would be to send a block of bytes at boot, or periodically, with a CRC, and check that it matches on the slave. Generally, I2C lines either work or they don't: they are low speed, usually have short traces, and allow high bus capacitance; if they aren't working, you won't see any ACKs at all.
My guess is no, if the I2C hardware is built into the PIC. All of the hardware implementations I've worked with have a state machine that can't help but ACK the second byte unless something is wrong with the transmission (a missing bit, for instance). You'd be better off making your own I2C implementation in software with bit-banging and an open-collector buffer for the ACK; then you can do anything you want. It won't be standard I2C, though, so watch out if you put any devices on the bus that don't work to your specifications. I'm not sure offhand, but I think a standard I2C master that doesn't receive an ACK may retransmit the data or just fault, since it isn't sure who has control of the bus after a failure (signaled by a NACK).