In the SDIO Card Specification, Section 8.1.2, it is mentioned that the DAT1 pin can also act as an IRQ line in 4-bit SD mode. What is the purpose of IRQ in the SDIO module?
An IRQ is a way for the SDIO card to attract the attention of the host by requesting an interrupt on the host - typically this makes some code run on the host, presumably in the host's SDIO card driver.
By using interrupts, the host does not have to continually check the status of the SDIO card waiting for a particular condition; instead, the SDIO card is designed to raise an IRQ when that condition occurs.
Usually the SDIO card will provide a way to enable/disable interrupt requests, probably in one of the SDIO card registers. Once the host has serviced the interrupt, it is cleared via some function-unique I/O operation from the host to the SDIO card.
The specific meaning of a particular interrupt request will depend completely on the particular card and driver, but for example if the SDIO card is receiving signals from an external device, the IRQ might signal that data is available. Or if the SDIO card is outputting data which is loaded (say) 16 bytes at a time from the host, the IRQ might indicate that the SDIO card can accept a further 16 bytes.
Typically in the host interrupt service routine the host will check the status of the card to determine the reason for the interrupt and then branch to code specific to that reason.
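For illustration, a minimal sketch of such a host-side service routine in C; the register address, bit names, and helper functions are hypothetical placeholders, since the real ones are defined by the particular card and driver:

    #include <stdint.h>

    /* Hypothetical host-side helpers and register layout; a real driver
       would use the registers documented for the particular SDIO card. */
    extern uint8_t sdio_read_reg(uint8_t addr);
    extern void    sdio_write_reg(uint8_t addr, uint8_t value);
    extern void    handle_incoming_data(void);
    extern void    send_next_chunk(void);

    #define INT_STATUS_REG     0x05   /* hypothetical interrupt status register */
    #define INT_DATA_AVAILABLE 0x01   /* card has data for the host */
    #define INT_TX_SPACE_FREE  0x02   /* card can accept more output data */

    void sdio_card_isr(void)
    {
        uint8_t status = sdio_read_reg(INT_STATUS_REG); /* why did the card interrupt? */

        if (status & INT_DATA_AVAILABLE)
            handle_incoming_data();

        if (status & INT_TX_SPACE_FREE)
            send_next_chunk();

        sdio_write_reg(INT_STATUS_REG, status); /* clear the serviced bits */
    }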
None of this is specific to SDIO - the same principles of using interrupts apply equally to any situation where I/O operations need to occur asynchronously from whatever else the host is doing.
I have a third-party device that is UART-programmable.
I need to create a USB-UART bridge with a functional password (programming only after entering the correct password).
I generated the code using the latest version of STM32CubeMX for Atollic TrueSTUDIO for STM32 9.3.0 ...
I transfer data between USB and UART through a buffer (one for USB-to-UART, and another one for UART-to-USB).
When I try to transfer several characters everything is OK, but when I try to transfer a large data packet, problems start because the USB speed is much higher than the UART's ...
There are two questions:
1. How do I tell USB that I need to stop transferring data and wait while the UART (buffer) is busy?
2. How, on the microcontroller side, do I get the baud rate set on the PC (set when the terminal is connected to the virtual COM port)?
USB provides flow control. That's what you need to implement. A general introduction can be found here:
https://medium.com/@manuel.bl/usb-for-microcontrollers-part-4-handling-large-amounts-of-data-f577565c4c7d
Basically, the setup for the USB-to-UART direction should be:
Indicate that the code is ready to receive a USB packet
Receive a USB packet
Indicate that you are no longer ready to receive a USB packet
Transmit the data via UART
Start over
Step 0: Initial setup
Call USBD_CDC_SetRxBuffer to set the buffer for receiving the USB data. Unless you use several buffers to achieve higher throughput, a single call at the start of the program is sufficient.
Step 1: Ready to receive data
Call USBD_CDC_ReceivePacket. Contrary to what the name implies, this function indicates that the app is ready to receive data. It returns immediately, before any data has actually been received.
Step 2: Receive a USB packet
You don't need to do anything here. It will happen automatically. Once it's complete, CDC_Itf_Receive will be called.
Step 3: Indicate that you are no longer ready to receive a USB packet
Nothing to do here. This happens automatically whenever a packet has been received (and double buffering is not enabled).
Step 4: Transmit the data via UART
I guess you know how to do this. It's up to you whether you want to do it in a blocking fashion or using DMA.
Since a callback is involved, you cannot put this code into the main loop. It might be possible to put all the code into CDC_Itf_Receive if blocking UART is used; the steps would then run in the order 2, 3, 4, 1. Additionally, initialization is needed (steps 0 and 1). See the sketch below.
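A minimal sketch of that blocking variant, assuming CubeMX/ST example naming (USBD_CDC_SetRxBuffer, USBD_CDC_ReceivePacket, CDC_Itf_Receive, HAL_UART_Transmit) and the usual hUsbDeviceFS/huart1 handles; the exact identifiers depend on your generated code, and CDC_Itf_Receive must be registered in the generated USBD_CDC_ItfTypeDef structure:

    #include "usbd_cdc_if.h"   /* CubeMX-generated; brings in the CDC class API */

    extern USBD_HandleTypeDef hUsbDeviceFS;
    extern UART_HandleTypeDef huart1;

    static uint8_t rx_buf[64];                       /* one full-speed bulk packet */

    void bridge_init(void)                           /* steps 0 and 1, run once */
    {
        USBD_CDC_SetRxBuffer(&hUsbDeviceFS, rx_buf); /* step 0: set the RX buffer */
        USBD_CDC_ReceivePacket(&hUsbDeviceFS);       /* step 1: ready for a packet */
    }

    /* Steps 2 and 3 happen inside the USB stack; it then calls this callback. */
    int8_t CDC_Itf_Receive(uint8_t *buf, uint32_t *len)
    {
        /* Step 4: blocking UART transmit. No further USB packet is accepted
           until reception is re-armed below - that is the flow control. */
        HAL_UART_Transmit(&huart1, buf, (uint16_t)*len, HAL_MAX_DELAY);

        USBD_CDC_ReceivePacket(&hUsbDeviceFS);       /* step 1 again: start over */
        return USBD_OK;
    }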
In the UART-to-USB direction, you would need to implement flow control on the UART. The USB flow control is managed by the host. Even though USB is much faster than UART, flow control is still relevant because the application on the host can process the data as slowly as it likes.
Regarding question 2: I'm not sure I understand it... The microcontroller cannot set the baud rate on the host. Either the host specifies the baud rate (transmitted over USB and applied to the UART), or, if the UART has a fixed baud rate, you can ignore the baud rate (any baud rate set on the host side will work, as it does not apply to USB).
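To still answer question 2 for the first case: the baud rate set on the PC arrives at the microcontroller in a CDC SET_LINE_CODING control request. A sketch of handling it, again assuming ST example naming (CDC_Itf_Control; the request codes below carry the values from the USB CDC specification, normally provided by usbd_cdc.h):

    #include <stdint.h>
    #include <string.h>

    #define CDC_SET_LINE_CODING 0x20U  /* request codes per the USB CDC spec */
    #define CDC_GET_LINE_CODING 0x21U

    /* 7-byte line coding structure as defined by the CDC specification */
    typedef struct __attribute__((packed)) {
        uint32_t dwDTERate;   /* baud rate set on the PC side */
        uint8_t  bCharFormat; /* stop bits: 0 = 1, 1 = 1.5, 2 = 2 */
        uint8_t  bParityType; /* 0 = none, 1 = odd, 2 = even, ... */
        uint8_t  bDataBits;   /* 5, 6, 7, 8 or 16 */
    } cdc_line_coding_t;

    static cdc_line_coding_t line_coding = { 115200, 0, 0, 8 };

    int8_t CDC_Itf_Control(uint8_t cmd, uint8_t *pbuf, uint16_t length)
    {
        switch (cmd) {
        case CDC_SET_LINE_CODING:        /* host -> device: new settings */
            memcpy(&line_coding, pbuf, sizeof(line_coding));
            /* apply line_coding.dwDTERate to the real UART here */
            break;
        case CDC_GET_LINE_CODING:        /* device -> host: report settings */
            memcpy(pbuf, &line_coding, sizeof(line_coding));
            break;
        default:
            break;
        }
        (void)length;
        return 0;                        /* USBD_OK */
    }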
USB 2.0 specifies 4 types of transfers (in section 5.4 Transfer Types):
Control Transfers
Isochronous Transfers
Interrupt Transfers
Bulk Transfers
Section 5.8 says that Bulk Transfers provide:
Access to the USB on a bandwidth-available basis
Retry of transfers, in the case of occasional delivery failure due to errors on the bus
Guaranteed delivery of data but no guarantee of bandwidth or latency
(Emphasis mine.)
I don't see a similar statement for Control Transfers. Do they also guarantee delivery? If not, how are users expected to handle failures?
Please provide a citation(s) to support your answer.
The USB specification provides robust error detection and recovery for control transfers. The control transfer will either be completed or the USB host will know that it failed, and I think that's what "guaranteed delivery" is supposed to mean. This is important because control transfers are used to set up the device when you plug it into a computer and they are also used for many important purposes by the various USB device classes (e.g. they are used to set the baud rate of a serial port on a USB CDC ACM device).
From section 5.5.5 of the USB 2.0 specification:
The USB provides robust error detection and recovery/retransmission for errors that occur during control transfers. Transmitters and receivers can remain synchronized with regard to where they are in a control transfer and recover with minimum effort. Retransmission of Data and Status packets can be detected by a receiver via data retry indicators in the packet. A transmitter can reliably determine that its corresponding receiver has successfully accepted a transmitted packet by information returned in a handshake to the packet. The protocol allows for distinguishing a retransmitted packet from its original packet except for a control Setup packet. Setup packets may be retransmitted due to a transmission error; however, Setup packets cannot indicate that a packet is an original or a retried transmission.
The only transfer type without guaranteed delivery is isochronous. Also, the start of frame (SOF) packets don't have guaranteed delivery.
Early Cisco routers running the IOS operating system enhanced their packet-processing speed by doing packet switching within the interrupt handler instead of in a "regular" operating-system process. Doing packet processing in the interrupt handler ensured that context switching within the operating system did not affect the packet processing. As I understand it, an interrupt handler is a piece of software in the operating system meant for handling interrupts. How should I understand the concept of packet switching done within the interrupt handler?
Use of interrupts is preferred when an event requires some immediate attention by the operating system, or by a program which installed an interrupt service routine. This is as opposed to polling, where software checks periodically whether a condition exists which indicates that the event has occurred.
Interrupt service routines aren't commonly meant to do a lot of work themselves. They are rather written to reach their end as quickly as possible, so that normal execution can resume - "normal execution" meaning the location and state at which previous processing was interrupted when the interrupt occurred. The reason is that it must be avoided that the same interrupt occurs again while its handler is still executing; otherwise it may be ignored, lead to incorrect results, or, even worse, to software failure (crashes). So what an interrupt service routine usually does is: read any data associated with the event and store it in a queue, signal that the queue has changed, set things up so that another interrupt may occur, then resume by restoring the pre-interrupt context. The queued data associated with that interrupt can now be processed asynchronously, without risking that interrupts pile up.
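A generic minimal sketch of that pattern in C; the two device "registers" are hypothetical stand-ins for reading the event data and re-arming the interrupt:

    #include <stdint.h>

    #define QUEUE_SIZE 256                /* power of two for cheap wrap-around */

    /* Hypothetical memory-mapped device registers (addresses made up). */
    #define DEVICE_DATA_REG (*(volatile uint8_t *)0x40000000u)
    #define DEVICE_ACK_REG  (*(volatile uint8_t *)0x40000004u)

    static volatile uint8_t  queue[QUEUE_SIZE];
    static volatile uint16_t head, tail;  /* head: ISR writes, tail: main reads */

    extern void process(uint8_t byte);    /* the actual, possibly slow, work */

    /* Interrupt service routine: do the minimum and return quickly. */
    void device_isr(void)
    {
        queue[head++ & (QUEUE_SIZE - 1)] = DEVICE_DATA_REG; /* read and queue */
        DEVICE_ACK_REG = 1;               /* allow the next interrupt to occur */
    }

    /* Main context: process queued data asynchronously, outside the ISR. */
    void main_loop(void)
    {
        for (;;) {
            while (tail != head)
                process(queue[tail++ & (QUEUE_SIZE - 1)]);
        }
    }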
The following is the procedure for executing interrupt-level switching:
Look up the memory structure to determine the next-hop address and outgoing interface.
Do an Open Systems Interconnection (OSI) Layer 2 rewrite, also called MAC rewrite, which means changing the encapsulation of the packet to comply with the outgoing interface.
Put the packet into the tx ring or output queue of the outgoing interface.
Update the appropriate memory structures (reset timers in caches, update counters, and so forth).
The interrupt which is raised when a packet is received from the network interface is called the "RX interrupt". This interrupt is dismissed only when all the above steps are executed. If any of the first three steps above cannot be performed, the packet is sent to the next switching layer. If the next switching layer is process switching, the packet is put into the input queue of the incoming interface for process switching and the interrupt is dismissed. Since interrupts cannot be interrupted by interrupts of the same level and all interfaces raise interrupts of the same level, no other packet can be handled until the current RX interrupt is dismissed.
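As a C-style sketch of that fast path, under the assumption that every name below is an illustrative stub and not actual IOS code:

    #include <stddef.h>

    /* Illustrative stub types and functions, not IOS code. */
    typedef struct { int dst_addr; int in_ifc; } packet_t;
    typedef struct { int out_ifc; int next_hop; } route_entry_t;

    route_entry_t *fast_cache_lookup(int dst_addr);
    int  can_interrupt_switch(const packet_t *pkt);
    void mac_rewrite(packet_t *pkt, int out_ifc, int next_hop);
    void enqueue_tx_ring(int ifc, packet_t *pkt);
    void enqueue_input_queue(int ifc, packet_t *pkt);
    void update_counters(route_entry_t *e);
    void dismiss_interrupt(void);

    void rx_interrupt(packet_t *pkt)
    {
        route_entry_t *e = fast_cache_lookup(pkt->dst_addr); /* step 1: lookup */

        if (e == NULL || !can_interrupt_switch(pkt)) {
            /* fall back: queue for process switching, then free the interrupt */
            enqueue_input_queue(pkt->in_ifc, pkt);
            dismiss_interrupt();
            return;
        }

        mac_rewrite(pkt, e->out_ifc, e->next_hop);  /* step 2: L2 rewrite */
        enqueue_tx_ring(e->out_ifc, pkt);           /* step 3: to the TX ring */
        update_counters(e);                         /* step 4: bookkeeping */
        dismiss_interrupt();                        /* new RX IRQs can now be taken */
    }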
Different interrupt switching paths can be organized in a hierarchy, from the one providing the fastest lookup to the one providing the slowest lookup. The last resort used for handling packets is always process switching. Not all interfaces and packet types are supported in every interrupt switching path. Generally, only those that require examination and changes limited to the packet header can be interrupt-switched. If the packet payload needs to be examined before forwarding, interrupt switching is not possible. More specific constraints may exist for some interrupt switching paths. Also, if the Layer 2 connection over the outgoing interface must be reliable (that is, it includes support for retransmission), the packet cannot be handled at interrupt level.
The following are examples of packets that cannot be interrupt-switched:
Traffic directed to the router (routing protocol traffic, Simple Network Management Protocol (SNMP), Telnet, Trivial File Transfer Protocol (TFTP), ping, and so on). Management traffic can be sourced by or directed to the router and is handled by specific task-related processes.
OSI Layer 2 connection-oriented encapsulations (for example, X.25). Some tasks are too complex to be coded in the interrupt-switching path because there are too many instructions to run, or timers and windows are required. Some examples are features such as encryption, Local Area Transport (LAT) translation, and Data-Link Switching Plus (DLSW+).
More here: http://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-121-mainline/12809-tuning.html
I am working on an embedded project which includes two half-duplex UARTs and one full-duplex UART.
UART1 is connected to Device A. UART2 is connected to Device B, and UART3 is connected to the PC. UART1 and UART2 are half-duplex, thus RX/TX modes have to be configured properly.
When a signal on UART1 is triggered, UART2 fetches some data from Device B. That data is put into a buffer and then transmitted back to UART1 AND UART3. Device A consumes the data and sends more items on UART1, which then have to be passed to UART2 for Device B to respond.
I was thinking about an efficient state machine that can handle the switching modes between TX/RX mode, and so far my UART code is interrupt driven. What would be some ways to tackle the flow of this program?
I don't think you will need a state machine for this case. Why not just hook up all interrupts accordingly and forward anything received from one device to the other(s)?
You may want to include a TX (ring) buffer to accommodate the different speeds of each UART, and then just have an RX-ISR write the data received to the appropriate TX buffer(s), from where it will then be consumed by the other UARTs' UDRE-ISRs, as sketched below.
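A minimal sketch of that idea, written for an ATmega2560-class AVR since UDRE-ISRs are mentioned; half-duplex direction switching is left out for brevity, and the vector and register names are the standard AVR ones:

    #include <avr/interrupt.h>
    #include <stdint.h>

    #define RING_SIZE 64                  /* power of two for cheap wrap-around */

    typedef struct {
        volatile uint8_t buf[RING_SIZE];
        volatile uint8_t head, tail;
    } ring_t;

    static ring_t tx2, tx3;               /* TX rings feeding UART2 and UART3 */

    static inline void ring_put(ring_t *r, uint8_t b)
    {
        r->buf[r->head++ & (RING_SIZE - 1)] = b;
    }

    /* RX-ISR of UART1: forward each received byte to the other TX rings. */
    ISR(USART1_RX_vect)
    {
        uint8_t b = UDR1;
        ring_put(&tx2, b);
        ring_put(&tx3, b);
        UCSR2B |= (1 << UDRIE2);          /* start draining via UDRE IRQs */
        UCSR3B |= (1 << UDRIE3);
    }

    /* UDRE-ISR of UART2: consumes its ring at UART2's own speed. */
    ISR(USART2_UDRE_vect)
    {
        if (tx2.tail != tx2.head)
            UDR2 = tx2.buf[tx2.tail++ & (RING_SIZE - 1)];
        else
            UCSR2B &= ~(1 << UDRIE2);     /* ring empty: stop UDRE IRQs */
    }

UART3's UDRE-ISR would look the same, operating on tx3; in the real half-duplex setup the RX-ISR would additionally have to switch UART1's direction before a reply can be sent.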
1) How can the processor recognize the device requesting the interrupt?
2) Given that different devices are likely to require different ISRs, how can the processor obtain the starting address in each case?
3) Should a device be allowed to interrupt the processor while another interrupt is being serviced?
4) How should two or more simultaneous interrupt requests be handled?
1) How can the processor recognize the device requesting the interrupt?
The CPU has several interrupt lines, and if you need more devices than there are lines there's an "interrupt controller" chip (sometimes called a PIC) which will multiplex several devices and which the CPU can interrogate.
2) Given that different devices are likely to require different ISRs, how can the processor obtain the starting address in each case?
That's difficult. It may be by convention (same type of device always on the same line); or it may be configured, e.g. in the BIOS setup.
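On many microcontrollers this is solved with a vector table: a fixed array of ISR start addresses which the hardware indexes by interrupt number. A Cortex-M-flavoured sketch (the section name, the _estack symbol, and the table position of the UART entry are toolchain- and chip-dependent assumptions, and most entries are omitted):

    #include <stdint.h>

    void Reset_Handler(void);
    void Default_Handler(void);
    void UART1_IRQHandler(void);           /* hypothetical device ISR */

    extern uint32_t _estack;               /* top of stack, from the linker script */

    /* The hardware takes the interrupt number, indexes this table, and jumps
       to the address found there - that is how it obtains each ISR address. */
    __attribute__((used, section(".isr_vector")))
    const void *const vector_table[] = {
        &_estack,                          /* 0: initial stack pointer */
        Reset_Handler,                     /* 1: reset entry point */
        Default_Handler,                   /* 2: NMI */
        Default_Handler,                   /* 3: HardFault */
        /* ... remaining core exceptions and earlier IRQs omitted ... */
        UART1_IRQHandler,                  /* entry 16 + n for peripheral IRQ n */
    };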
3) Should a device be allowed to interrupt the processor while another interrupt is being serviced?
When there's an interrupt, further interrupts are disabled. However, the interrupt service routine (i.e. the device-specific code which the CPU is executing) may re-enable interrupts if it's willing to be interrupted.
4) How should two or more simultaneous interrupt requests be handled?
Each interrupt has a priority: the higher-priority interrupt is handled first.
The concept of defining the priority among devices, so as to know which one is to be serviced first in case of simultaneous requests, is called a priority interrupt system. This can be done with either software or hardware methods.
SOFTWARE METHOD – POLLING
In this method, all interrupts are serviced by branching to the same service program. This program then checks each device to determine whether it is the one generating the interrupt. The order of checking is determined by the priorities that have been set: the device with the highest priority is checked first, and the remaining devices are checked in descending order of priority.
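A minimal sketch of such a common service program in C (the device list and its check/service functions are illustrative):

    #include <stddef.h>

    typedef struct {
        int  (*pending)(void);   /* returns nonzero if this device interrupted */
        void (*service)(void);   /* device-specific service routine */
    } device_t;

    int  timer_pending(void);  void timer_service(void);  /* illustrative stubs */
    int  uart_pending(void);   void uart_service(void);
    int  disk_pending(void);   void disk_service(void);

    /* Ordered by priority: highest first, so ties resolve to the top entry. */
    static const device_t devices[] = {
        { timer_pending, timer_service },
        { uart_pending,  uart_service  },
        { disk_pending,  disk_service  },
    };

    /* Common entry point all interrupts branch to: poll in priority order. */
    void common_interrupt_handler(void)
    {
        for (size_t i = 0; i < sizeof(devices) / sizeof(devices[0]); i++) {
            if (devices[i].pending()) {
                devices[i].service();
                return;          /* service the highest-priority requester first */
            }
        }
    }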
HARDWARE METHOD – DAISY CHAINING
The daisy-chaining method involves connecting all the devices that can request an interrupt in a serial manner. This configuration is governed by the priority of the devices. The device with the highest priority is placed first.