How does a hardware interrupt trigger software handlers without any prior setup [closed] - embedded

I am currently learning about processor interrupts and have run into some confusion. From what I understand, a processor has a set of external interrupt lines for peripherals, so that manufacturers can provide a way to interrupt the processor via their own peripherals. I know that with this particular processor (an ARM Cortex-M0+), once an external interrupt line is triggered, it will go to its vector table at the corresponding interrupt request offset and (I could be wrong here) will execute the ARM Thumb code at that address.
And if I understand correctly, some processors will look at the value at said IRQ address, which will point to the address of the interrupt handler.
Question 1
While learning about the ARM Cortex-M0+ vector table: what is the Thumb code at that address doing? I am assuming it is doing something like setting the PC register to the interrupt handler's address, but that is just a stab in the dark.
Question 2
Also, the only way that I have found so far to handle the EIC interrupts is to use the following snippet:
void EIC_Handler() {
    // Code to handle interrupt
}
I am perplexed as to how this function is called without any setup or explicit reference to it in my actual C code. How does the program go from the vector table lookup to calling this function?
EDIT #1:
I was wrong about the vector table containing Thumb code. The vector table contains the addresses of the exception handlers.
EDIT #2:
Despite getting the answer I was looking for, my question apparently wasn't specific enough or was "off-topic", so let me clarify.
While reading/learning from multiple resources on how to handle external interrupts in software, I noticed every source said to just add the code snippet above. I was curious how the interrupt went from hardware all the way to calling my EIC_Handler() without me setting anything up other than defining the function and configuring the EIC. So I researched what a vector table is and how the processor jumps to specific entries in it when different interrupts happen. That still didn't answer my question, as I wasn't setting up the vector table myself, yet my EIC_Handler() function was still being called.
So somehow, at compile time, the vector table had to be created with the corresponding IRQ entry pointing to my EIC_Handler(). I searched through a good amount of SAML22 and Cortex-M0+ documentation (and misread that the vector table contained Thumb code) but couldn't find anything on how the vector table was being set up, which is why I decided to look for an answer here. And I got one!
I found that the IDE (Atmel Studio) and the project configuration I had chosen came with a small file defining weak functions, an implementation of the reset handler, and the vector table. There was also a custom linker script grabbing the addresses of the functions and putting them into the vector table; if a weak function had a user implementation, the table would point to that implementation, and it would be called when the appropriate interrupt request occurred.

For the Cortex-M0 (and other Cortexes? Cortices?) the vector table doesn't contain Thumb code; it is a list of the addresses of the functions that implement your exception handlers.
When the processor takes an exception it first pushes a stack frame (xPSR, PC, LR, R12, R3-R0) onto the currently active stack (MSP or PSP), then fetches the address of the exception handler from the vector table, and then starts running code from that location.
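For reference, you can picture the hardware-stacked frame as a C struct (purely descriptive; the hardware builds this itself and nothing in your code needs to declare it):

#include <stdint.h>

/* Frame pushed by hardware on exception entry, lowest address first. */
struct exception_frame {
    uint32_t r0, r1, r2, r3;   /* caller-saved registers of the interrupted code */
    uint32_t r12;
    uint32_t lr;               /* LR of the interrupted context */
    uint32_t pc;               /* resume address of the interrupted code */
    uint32_t xpsr;             /* program status at the point of interruption */
};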
When a POP instruction loads the PC, or a BX instruction executes from within the exception handler, with one of the special EXC_RETURN values that was placed in LR on exception entry, the processor returns from the exception handler: it destacks the stack frame which was pushed and carries on executing from where it left off. This process is explained in the Cortex M0+ User Guide - Exception Entry And Exit.
For question 2: the vector table on the Cortex-M0/M0+ is located at address 0x00000000 by default. Some Cortex-M0/M0+ implementations allow remapping of the vector table using the vector table offset register (VTOR) within the System Control Block; others allow you to remap which memory appears at address 0x00000000.
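As a sketch of the first option (assuming your part actually implements VTOR, that a CMSIS device header is available, and that the __Vectors symbol name and 48-entry table size match your startup code; all of those are assumptions here):

#include <stdint.h>
#include "sam.h"                       /* hypothetical device header: SCB, __DSB() */

extern uint32_t __Vectors[];           /* flash vector table from the startup file */
static uint32_t ram_vectors[48] __attribute__((aligned(256)));

void relocate_vector_table(void)
{
    for (unsigned i = 0; i < 48; i++)  /* copy the existing table into RAM */
        ram_vectors[i] = __Vectors[i];

    SCB->VTOR = (uint32_t)ram_vectors; /* retarget the table */
    __DSB();                           /* complete the write before new exceptions */
}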
Depending on which tool set/library you're using, there are different ways of defining the vector table and saying where it should live in memory.
There are usually weakly linked functions with the names of the exceptions available for your microcontroller; when you implement one of them in your source files, it is linked instead of the weak version, and its address gets put into the vector table.
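A minimal sketch of how such a startup file typically looks (GCC-style attributes; the section name and symbol names are illustrative, not the exact SAML22 ones):

#include <stdint.h>

extern uint32_t _estack;       /* top-of-stack symbol from the linker script */
int main(void);
void Reset_Handler(void);

void Default_Handler(void) { for (;;); }

/* Weak alias: if your code defines EIC_Handler(), the linker uses your
   version; otherwise this table entry falls back to Default_Handler. */
void EIC_Handler(void) __attribute__((weak, alias("Default_Handler")));

/* The linker script places the ".vectors" section at address 0x00000000. */
__attribute__((section(".vectors"), used))
void (* const vector_table[])(void) = {
    (void (*)(void))&_estack,  /* entry 0: initial MSP, loaded by hardware */
    Reset_Handler,             /* entry 1: reset vector */
    /* ... remaining core exceptions and peripheral IRQ entries ... */
    EIC_Handler,               /* the entry at the EIC's IRQ offset */
};

void Reset_Handler(void)
{
    /* A real startup file copies .data and zeroes .bss here first. */
    main();
    for (;;);
}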
I have no experience with Atmel-based ARMs, but @Lundin in the comments says the vector table is located in a "startup_samxxx.c" file. If you've started from scratch, it is up to you to ensure you have a suitable vector table and that it's located in a sensible place.

Related

Interpreting Cortex M4 hard fault error debug info

Sorry - this is long! I'm sure I'll get some TL;DRs. :/
I'm not at all new to the world of Cortex M3/4; I have encountered plenty of hard fault errors in the past and, without exception, they have been due to stack overflow on FreeRTOS. However, in this case, I'm really struggling to track down a hard fault on a system running someone else's old code that I have slightly modified.
I have a system with an LCD and touch screen. We have new hardware, which is almost identical to the old hardware apart from the change from an LPC1788 to the drop-in-equivalent LPC4088, and the touch screen being I2C rather than SPI.
I'm using Keil uVision (which is new to me) with the NXP LPC4088, which is an M4 core, and the Keil RL-ARM RTOS (also new to me), in a C/C++ hybrid project; C++ is also not something I have much experience with. On top of this there is the Segger emWin (which I've never used) closed-source code, where it always seems to be crashing. It will render a few screens, read the touch screen buttons etc., and then fall over. Sometimes it falls over immediately, though.
I have followed this:
http://www.keil.com/appnotes/files/apnt209.pdf
I have attached a picture of the debugger/IDE when it crashes below.
When it crashes, the highlighted green task in the OS is, without exception, ApplicationTask (which I have not modified).
If I am reading the info correctly, the Keil uVision debugger tells me that the stack being used was the MSP, which is at address 0x20003238. There's a memory dump below:
If I understand correctly, this means that R0, R2, R3 and R12 are 0, and the program counter is 0, as are LR and PSR. However, this goes against what's in the Call Stack + Locals window in the first picture. If I right-click on the 0x00005A50 underneath ApplicationTask:4 and choose "caller code", it tells me it is
BL.W GUI_ALLOC_UnlockH
Which is in the emWin binary blob, I think.
However, if I look at 0x20001B60 (which is the PSP stack value) as below:
That seems to tally up much better with what the Call Stack + Locals window tells me. It also seems to say that it's crashing in emWin, and extensive Googling shows that Segger always completely wash their hands of any possibility that their closed-source code could be at fault. To be fair, that's unlikely, as it was working OK until I modified the code to use an I2C touch screen interface rather than SPI. However, where it's crashing (or seems to be) has nothing to do with the code I have modified.
Also, this window below:
It gives the BFAR address as 0xF00B4DDA and the memory manager fault address as 0xF00B4DDA. I don't know whether I should be interpreting this as being the issue.
I have found a few other posts around the web where people have the same issue, including one here on Stack Overflow staggeringly similar to this, but none of them has a solution associated with it.
So, my questions are:
Am I reading this data correctly and understanding the Keil document I linked to? I really feel I must be missing something with this MSP/PSP issue.
Am I using the caller-code function in uVision correctly? Right-clicking on the Call Stack + Locals address below ApplicationTask:4 always seems to take me to some Segger code, which I can't examine and which surely isn't what's at fault.
Should I really be reading the issue as a bus fault, with it trying to read from or write to 0xF00B4DDA, which is reserved space?
I tried implementing a piece of code such as this:
https://blog.frankvh.com/2011/12/07/cortex-m3-m4-hard-fault-handler/
But that just stops the whole system running properly and ends up at a BKPT instruction in some init code. On top of this, I am not convinced this kind of thing would tell me any more than uVision does, other than showing it to me slightly faster and with zero effort. Am I right in this latter assumption?
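(For reference, a minimal sketch of the kind of handler that app note and blog post describe, assuming a Cortex-M3/M4 and GCC/Clang-style inline assembly; hard_fault_c is a made-up name:)

#include <stdint.h>

void hard_fault_c(uint32_t *frame)
{
    /* Hardware-stacked frame: R0, R1, R2, R3, R12, LR, PC, xPSR. */
    volatile uint32_t pc   = frame[6];
    volatile uint32_t lr   = frame[5];
    /* Fault status/address registers at their fixed M3/M4 addresses. */
    volatile uint32_t cfsr = *(volatile uint32_t *)0xE000ED28;
    volatile uint32_t bfar = *(volatile uint32_t *)0xE000ED38;
    (void)pc; (void)lr; (void)cfsr; (void)bfar;
    for (;;);                  /* park here and inspect in the debugger */
}

__attribute__((naked)) void HardFault_Handler(void)
{
    __asm volatile(
        "tst lr, #4        \n" /* bit 2 of EXC_RETURN: 0 = MSP, 1 = PSP */
        "ite eq            \n"
        "mrseq r0, msp     \n"
        "mrsne r0, psp     \n"
        "b hard_fault_c    \n"
    );
}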

CC2540 SPI SD card

I am working on a CC2540 with 128K flash. My target is to build an SPI interface between the CC2540 and an SD card. So far I have built the interface using Chan's library and the SimpleBLEPeripheral example, without errors or warnings. But when I try to call SD_SPI_initialization() from osal_init_tasks or the periodic function, everything stops.
I need to understand some basic points in order to proceed! Has anyone built an interface like this who can give some guidelines?
I have already asked about this on the TI forum, but no one answers.
I also thought of using the HostTestRelease sample project, specifically the CC2540SPI version, but it gives some errors concerning the stack or OSAL_CB_TIMER.

ARM Cortex-M3 Startup Code

I'm trying to understand how the initialization code that ships with Keil (RealView v4) for the STM32 microcontrollers works. Specifically, I'm trying to understand how the stack is initialized.
The documentation on ARM's website mentions that one of the routines in startup_xxx.s, __user_initial_stack_heap, should not use more than 88 bytes of stack. Do you know where that limitation comes from?
It seems that when the reset handler calls System_Init it is executing a couple of functions in a C environment, which I believe means it is using some form of temporary stack (it allocates a few automatic variables). However, all of those stacked items should be out of scope once it returns and then calls __main, which is where __user_initial_stack_heap is called from.
So why is there this requirement for __user_initial_stack_heap to not use more than 88 bytes? Does the rest of __main use a ton of stack or something?
Any explanation of the Cortex-M3 stack architecture as it relates to the startup sequence would be fantastic.
You will see from the __user_initial_stackheap() documentation that the function is for legacy support and has been superseded by __user_setup_stackheap(); the documentation for the latter provides a clue regarding your question:
Unlike __user_initial_stackheap(), __user_setup_stackheap() works with systems where the application starts with a value of sp (r13) that is already correct, for example, Cortex-M3
[..]
Using __user_setup_stackheap() rather than __user_initial_stackheap() improves code size because there is no requirement for a temporary stack.
On Cortex-M the SP is initialised on reset by the hardware, from a value stored in the vector table; on older ARM7 and ARM9 devices this is not the case, and it is necessary to set the stack pointer in software. The start-up code needs a small stack for use before the user-defined stack is applied; this may be the case, for example, if the user stack were in external memory and could not be used until the memory controller was initialised. The 88-byte restriction is imposed simply because this temporary stack is sized to be as small as possible, since it is probably unused after start-up.
In your case on the STM32 (a Cortex-M device), it is likely that there is in fact no such restriction, but you should perhaps update your start-up code to use the newer function to be certain. That said, given the required behaviour of this function and the fact that its results are returned in registers, I would suggest that 88 bytes would be rather extravagant if you were to need that much! Moreover, you only need to reimplement it if you are using a scatter-loading file, as described.

CUDA fmaf function [closed]

I am trying to optimize some CUDA code. I replaced the expression
result = x*y+z
with
result = fmaf(x,y,z)
But it gives an error: "CUDA error: kernel launch failure (7): too many resources requested for launch".
As @JackOLantern indicated, it's likely the device code compiler will make this kind of optimization for you. You can compare the two cases by using:
nvcc -ptx -arch... mycode.cu
to see what kind of PTX code got emitted in each case, or:
cuobjdump -sass myapp
to see what kind of SASS (device machine code) got emitted in each case.
You haven't supplied any actual code, but "too many resources requested for launch" in the context of this question is most likely due to requesting too many registers per threadblock: (registers per thread) * (threads per block) must be less than the maximum registers allowable per block, i.e. per multiprocessor.
You can determine the maximum registers per multiprocessor for your device using the deviceQuery sample code or from the programming guide.
You can find out how many registers per thread the compiler is using by specifying:
-Xptxas -v
as additional command-line switches when compiling your code.
You can use the __launch_bounds__ qualifier to instruct the compiler to use fewer registers per thread.
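For example, a hypothetical kernel sketch (the kernel and its names are made up for illustration):

/* __launch_bounds__(256) promises the kernel is never launched with more
   than 256 threads per block, letting the compiler budget registers. */
__global__ void __launch_bounds__(256)
fma_kernel(const float *x, const float *y, float *out, float z, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = fmaf(x[i], y[i], z);  /* explicit fused multiply-add */
}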

What is / where can I find more info on "HI2COUT"?

I'm looking to bit-bang the I2C interface of an MCP23017 with an ATtiny13A. A lot of places mention HI2COUT as a method to send data on the I2C bus, but I have no clue if this is part of a language or a library, or even a description of what happens when it is called. So, the questions:
1) Where can I get info on HI2COUT?
2) If anyone has ever interfaced with an MCP23017, can you post the proper sequence to set 1 (or all) pins as outputs and set them HIGH? (This includes start, write address, write register IOCON, ..., stop, etc...)
3) This may be too "hardware"-like for Stack Overflow; if anyone knows of a site better suited for this question (or may have the answer) please let me know.
Do you mean you're interested in programming the ATtiny13A (so that it can talk to a target device, which happens to be an MCP23017, but that's not an important detail)?
Just guessing, HI2COUT might be the name of a memory-mapped register to output data to the I2C peripheral of a microprocessor. However, looking at the ATtiny13A data sheet and the MCP23017 data sheet, I can't see such a register named. Perhaps that is the name of a register for an I2C peripheral of a different type of microprocessor?
The MCP23017 has I2C hardware built in; see section 1.3.2 "I2C Interface" starting on page 5 of the MCP23017 data sheet. It will tell you how to do I2C on that device. But assuming it's the ATtiny13A you want to program, it looks as though it has no I2C hardware, so, as you say, bit-banging is needed.
I suggest doing an Internet search for "ATtiny13A i2c" and you should be able to find several examples.
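To give a flavour of the sequence asked about in question 2, here is a hedged sketch in AVR C (it assumes the MCP23017 address pins are tied low, giving 7-bit address 0x20, the power-on BANK=0 register map, external pull-up resistors on both lines, and the default 1.2 MHz ATtiny13A clock; the pin choice and helper names are made up):

#define F_CPU 1200000UL
#include <avr/io.h>
#include <util/delay.h>

#define SDA_PIN PB0
#define SCL_PIN PB1

/* Open-drain emulation: drive low via DDR, release to let the external
   pull-up take the line high (PORT bits stay 0 throughout). */
static void sda_low(void)  { DDRB |=  (1 << SDA_PIN); }
static void sda_high(void) { DDRB &= ~(1 << SDA_PIN); }
static void scl_low(void)  { DDRB |=  (1 << SCL_PIN); }
static void scl_high(void) { DDRB &= ~(1 << SCL_PIN); }

static void i2c_start(void)
{
    sda_high(); scl_high(); _delay_us(5);
    sda_low();  _delay_us(5);      /* SDA falls while SCL is high */
    scl_low();
}

static void i2c_stop(void)
{
    sda_low();  scl_high(); _delay_us(5);
    sda_high(); _delay_us(5);      /* SDA rises while SCL is high */
}

static void i2c_write_byte(uint8_t b)
{
    for (uint8_t i = 0; i < 8; i++) {
        if (b & 0x80) sda_high(); else sda_low();
        scl_high(); _delay_us(5); scl_low();
        b <<= 1;
    }
    sda_high();                    /* release SDA for the ACK bit */
    scl_high(); _delay_us(5);      /* ACK is ignored in this sketch */
    scl_low();
}

int main(void)
{
    PORTB &= ~((1 << SDA_PIN) | (1 << SCL_PIN));

    i2c_start();
    i2c_write_byte(0x40);          /* 0x20 << 1, R/W bit = write */
    i2c_write_byte(0x00);          /* IODIRA register */
    i2c_write_byte(0x00);          /* all port A pins as outputs */
    i2c_stop();

    i2c_start();
    i2c_write_byte(0x40);
    i2c_write_byte(0x12);          /* GPIOA register (writes land in OLATA) */
    i2c_write_byte(0xFF);          /* drive all port A pins high */
    i2c_stop();

    for (;;);
}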