I'm working on Silicon Labs EFM32 Tiny Gecko hardware, running RTX built with the ARM 4.22 toolchain.
I have the following configuration for RTX:
- NVIC priority grouping 7.1
- SysTick and PendSV ("Pend System Service") interrupt priority 224.
- Both interrupts are enabled and never disabled by my code flow.
- PRIMASK and BASEPRI registers are both 0.
The RTX code in my project is a few years old, and I'm not sure which version it is.
I observed the following issue: when using isr_evt_set to trigger a task from the RTC interrupt, the task's execution is delayed. I found that the PendSV ("Pend System Service") interrupt is not taken when the RTC interrupt ends.
isr_evt_set puts PendSV into the pending state when called from the RTC interrupt. After the RTC interrupt ends, however, PendSV does not become active. Instead the processor returns to Thread mode and executes a low-priority (power management) task.
I clear the SLEEPONEXIT bit in the SCB register inside the RTC interrupt handler. The PendSV handler is eventually executed ~4-10 RTC cycles later.
I expect the PendSV interrupt to run immediately after the RTC interrupt.
Can you explain why the Cortex goes back into Thread mode after the RTC interrupt?
The Cortex-M3 manual states that "Pend System Service" is an exception, so I expected the processor to service it before returning to Thread mode.
I found that, before the RTC interrupt occurs, the task responsible for putting the system to sleep calls tsk_lock(). When isr_evt_set is called from the RTC interrupt, its request to activate PendSV is buffered and serviced only after the processor returns to Thread mode and calls tsk_unlock(), which immediately sets the NVIC pending flag for PendSV. Using a debugger I can see that PendSV_Handler() is called shortly after tsk_unlock().
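The deferral described above can be modelled in a few lines of C. This is not the real RTX source (internals vary by version); tsk_lock()/tsk_unlock() and the buffered-request flag are simplified stand-ins that reproduce the observed behaviour:

```c
#include <stdbool.h>

/* Simplified model of the RTX behaviour described above (illustrative,
 * not the real RTX implementation). While the scheduler is locked, an
 * ISR-side event set is buffered instead of pending PendSV. */

static bool scheduler_locked;   /* set by tsk_lock(), cleared by tsk_unlock() */
static bool pendsv_pending;     /* models the NVIC PENDSVSET flag */
static bool buffered_request;   /* ISR request deferred while locked */

static void tsk_lock(void) { scheduler_locked = true; }

static void isr_evt_set(void)
{
    if (scheduler_locked)
        buffered_request = true;   /* deferred: PendSV is NOT pended yet */
    else
        pendsv_pending = true;     /* normal path: pend PendSV immediately */
}

static void tsk_unlock(void)
{
    scheduler_locked = false;
    if (buffered_request) {        /* flush the deferred request */
        buffered_request = false;
        pendsv_pending = true;     /* PendSV fires only now */
    }
}
```

Running the sequence from the question against this model (lock, RTC interrupt with isr_evt_set, return to Thread mode, unlock) shows PendSV pended only after tsk_unlock(), matching what the debugger shows.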
There is a JVM on a shared machine. Other developers may attach a remote debugger from IDEA, which hangs the process at breakpoints.
For certain reasons, I need the process to continue running.
I've written an agent with JVM TI and tried to receive breakpoint events so I could clear the breakpoints, but no events are received.
What is the right way to receive breakpoint events? Or is there another way to prevent the hang caused by remote debugging?
Thanks in advance.
In the HotSpot JVM, can_generate_breakpoint_events is an exclusive capability: only one JVM TI agent at a time may possess it.
The standard jdwp agent used for remote debugging is itself a JVM TI agent. While it is loaded, no other JVM TI agent can acquire the can_generate_breakpoint_events capability. As a result, your agent will not be able to set or clear breakpoints, nor receive breakpoint events.
What you can try instead is to modify the original libjdwp rather than intercepting breakpoint events in a separate agent. Or, even simpler: forcibly close the jdwp connections whenever you want to resume the suspended application.
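A toy model of that exclusivity rule, assuming nothing about HotSpot internals beyond what is stated above (the function name here is invented; only the JVMTI_ERROR_NOT_AVAILABLE error code is real JVM TI):

```c
/* Toy model of HotSpot's exclusive-capability rule (illustrative only;
 * the real check lives inside the JVM, not in agent code). The first
 * agent to add can_generate_breakpoint_events owns it; any later
 * AddCapabilities call fails, which is why a second agent sees no
 * breakpoint events once the jdwp agent is loaded. */
#include <stdbool.h>

#define JVMTI_ERROR_NONE          0
#define JVMTI_ERROR_NOT_AVAILABLE 98   /* real JVM TI error code */

static bool breakpoint_capability_taken;

static int add_breakpoint_capability(void)
{
    if (breakpoint_capability_taken)
        return JVMTI_ERROR_NOT_AVAILABLE;  /* second agent is refused */
    breakpoint_capability_taken = true;    /* first agent owns it */
    return JVMTI_ERROR_NONE;
}
```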
I do not understand the Suspended and Resume states in the USB 2.0 protocol.
The USB 2.0 specification states:
All devices must suspend if bus activity has not been observed for the length of time specified in
Chapter 7. Attached devices must be prepared to suspend at any time they are powered, whether they have
been assigned a non-default address or are configured. Bus activity may cease due to the host entering a
suspend mode of its own.
The length of time specified in Chapter 7 is 3 frames. I don't understand what "no bus activity" means: does it mean no packets at all? Or no packets addressed to this device (so that if the device sees SOF packets, it should not enter the Suspended state)?
In addition, a USB device shall also enter the Suspended state when the hub port
it is attached to is disabled. This is referred to as selective suspend.
How can a hub port be disabled? Does the hub port itself decide to do so (under which conditions?), or does the host send a command to the hub to do so (which command?)?
Is it correct to assume that, from the device's point of view, suspend and selective suspend are the same, because in both cases the device simply sees no bus activity?
A USB device exits suspend mode when there is bus activity. A USB device may also request the host to
exit suspend mode or selective suspend by using electrical signaling to indicate remote wakeup.
I do not understand this part. Why would a USB device request the host to exit suspend mode or selective suspend, given that it is always the host that initiates transactions?
Thank you for your help.
Bus activity refers to any packet seen by the device.
I don't know all the details of selective suspend, but I believe the operating system can tell when nothing is using a USB device, and then tell the USB port to suspend that device to save power.
As for why a USB device would ask the host to exit suspend mode: have you ever noticed that you can wake your computer from sleep by pressing a key on its keyboard or clicking a button on its mouse? That is remote wakeup.
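Putting the first point in code: a minimal device-side sketch, assuming "any packet, SOF included, counts as activity" and the 3 ms (3 frame) idle threshold from the question. Both global suspend and selective suspend look identical to this logic, since either way the device just stops seeing packets:

```c
/* Device-side suspend rule: ANY packet (including SOF tokens) counts as
 * bus activity, so a device on an active bus never suspends; only a
 * fully idle bus for ~3 ms does. All times in milliseconds. */
#include <stdbool.h>

#define SUSPEND_IDLE_MS 3   /* "3 frames" of idle per Chapter 7 */

static unsigned last_activity_ms;

/* Called for every packet seen on the bus, SOF tokens included. */
static void on_bus_activity(unsigned now_ms) { last_activity_ms = now_ms; }

/* Device should enter Suspended once the bus has been idle >= 3 ms. */
static bool should_suspend(unsigned now_ms)
{
    return (now_ms - last_activity_ms) >= SUSPEND_IDLE_MS;
}
```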
This is a problem I have been trying to solve for years, periodically spending 1-2 months on it.
I am using the Metrowerks IDE and the ColdFire C compiler MCFCCompiler ver 4.0 to build embedded code that uses the USB module for communication with the host. The product with this hardware has been out for eight years and has been pretty successful. However, over those years we have been getting complaints from the field that occasionally the communication with the host hangs and the operation is unrecoverable.
I tracked the bug down using a USB sniffer and the ColdFire debug hardware, and this is the condition and scenario I find the code in.
The communication break is on the firmware side, not in the driver on the host.
The hang-up happens only when USB firmware commands are sent from the host (Windows 7) in rapid fire from multiple threads. Every firmware command replies back to the host, so there is maximum traffic through the USB port.
I am using the implementation provided by Motorola that is well documented in USB-STAND-ALONE-DRIVER_V03.pdf (Google will find it for you). Two functions are my focus, and they should play nicely together: usb_in_service (called by the interrupt handler) and usb_tx_data (which initiates the transfer that will at some point generate an interrupt).
The usb_tx_data function is implemented to bail out if the USB FIFO still has data to send to the host, but waiting for the FIFO to clear takes the code into an infinite loop.
No more interrupts occur after this, although the USB module's register contents tell me the interrupts are still enabled.
I checked that the USB module did not get reset event and is not suspended either.
The main question is whether the error is in the USB module hardware or in the code. I don't find any errata pointing to this problem. If it's the code, where is the hole in its logic?
The hot pursuit is on because we are building a new product line based on this same firmware, and I cannot release it until this is solved.
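One defensive rewrite worth trying while the root cause is unknown: bound the wait on the FIFO instead of spinning forever, and recover when it times out. This is a hedged sketch; usb_reset_endpoint and the loop bound are invented placeholders, not part of the Motorola driver:

```c
/* Sketch: replace usb_tx_data's unbounded wait on the FIFO with a
 * bounded one. If the FIFO never drains (e.g. because the expected
 * interrupt was lost), recover instead of hanging the firmware. */
#include <stdbool.h>

#define TX_TIMEOUT_LOOPS 100000UL   /* tuning value, hardware-dependent */

static volatile bool fifo_busy;     /* stands in for the FIFO-busy flag */
static int recovery_count;          /* how many times we had to recover */

/* Placeholder recovery action; a real one might flush the endpoint
 * FIFO and re-arm the transmit interrupt. */
static void usb_reset_endpoint(void) { fifo_busy = false; ++recovery_count; }

/* Returns true if the transfer was started, false if we had to recover. */
static bool usb_tx_data_bounded(void)
{
    unsigned long loops = 0;
    while (fifo_busy) {
        if (++loops >= TX_TIMEOUT_LOOPS) {  /* FIFO never cleared: lost IRQ */
            usb_reset_endpoint();           /* recover instead of hanging */
            return false;
        }
    }
    /* ...start the transfer here (original usb_tx_data body)... */
    return true;
}
```

This doesn't explain why the interrupt is lost, but it turns an unrecoverable field hang into a logged, recoverable event, which also helps narrow down whether the fault is in the module hardware or the driver logic.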
I want to observe interrupt handling in my system. I'm using Windows 8.1, and I can also use Ubuntu 14.04.1 in a VMware virtual machine.
Any information about interrupt handling, counting interrupts, and watching their processing would be useful. Is there an application that does this kind of monitoring?
Please help; I'm stuck.
Thank you.
I'd recommend trying to search for an answer before asking a question. This is shamelessly copy/pasted from http://www.linuxjournal.com/content/watch-live-interrupts.
To see the interrupts occurring on your system, run the command:
watch -n1 "cat /proc/interrupts"
The watch command executes another command periodically, in this case "cat /proc/interrupts". The -n1 option tells watch to run the command every second.
Try adding -d to highlight the differences between successive updates.
Man page link for the watch command: http://linux.die.net/man/1/watch
Introduction to Linux Interrupts (describes what /proc/interrupts is all about): http://www.thegeekstuff.com/2014/01/linux-interrupts/
The first column is the IRQ number.
The second column shows how many times each CPU core has been interrupted by that IRQ.
For an interrupt like rtc (real-time clock), the CPU has not been interrupted at all. RTCs are present in electronic devices to keep track of time.
NMI (non-maskable interrupt) and LOC (local timer interrupt) are used by the system and are not accessible or configurable by the user.
The IRQ number determines the priority with which the CPU handles the interrupt: a smaller IRQ number means higher priority.
For example, if the CPU receives interrupts from the keyboard and the system clock simultaneously, it will serve the system clock first, since the clock has IRQ number 0.
IRQ 0 — system timer (cannot be changed);
IRQ 1 — keyboard controller (cannot be changed);
IRQ 3 — serial port controller for serial port 2 (shared with serial port 4, if present);
IRQ 4 — serial port controller for serial port 1 (shared with serial port 3, if present);
IRQ 5 — parallel port 2 and 3 or sound card;
IRQ 6 — floppy disk controller;
IRQ 7 — parallel port 1. It is used for printers or for any parallel port if a printer is not present.
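The column layout described above can be parsed mechanically. A minimal sketch, assuming the single-CPU layout of /proc/interrupts (a multi-core machine would add one count column per CPU) and a hard-coded sample line rather than the live file:

```c
/* Parse one /proc/interrupts-style line to make the columns concrete:
 * IRQ number, interrupt count, then (after the controller column) the
 * device name. Single-CPU layout assumed. */
#include <stdio.h>

struct irq_line {
    int  irq;        /* first column: IRQ number */
    long count;      /* second column: times the CPU was interrupted */
    char name[32];   /* device name, e.g. "i8042" for the keyboard */
};

/* Returns 0 on success, -1 if the line doesn't match the layout.
 * %*s skips the interrupt-controller column (e.g. "IO-APIC-edge"). */
static int parse_irq_line(const char *line, struct irq_line *out)
{
    return sscanf(line, " %d: %ld %*s %31s",
                  &out->irq, &out->count, out->name) == 3 ? 0 : -1;
}
```

For example, the line "  1:   9309  IO-APIC-edge  i8042" parses to IRQ 1 (the keyboard controller from the table above) with 9309 interrupts served.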
For Windows
Original Question: How can I find out what is causing interrupts on Windows?
There are a couple of answers there you may benefit from, such as Windows Process Explorer (which shows how much processor time is spent servicing interrupts), Windows Performance Analyzer (WPA), the xperf command, and the DPC/ISR action.
I am working on KVM optimization for VMs' I/O. I have read the KVM code; usually every physical interrupt causes a VM exit into KVM, and the host's IDT then handles the corresponding physical interrupt. My question is: how does KVM decide whether to inject a virtual interrupt into the guest, and under what circumstances does it inject one?
Thanks
The KVM documentation describes when a virtual interrupt can be injected. Here's the link: http://os1a.cs.columbia.edu/lxr/source/Documentation/kvm/api.txt
Look at line number 905.
The struct kvm_run structure, I think, gives the application control over how it makes the VM behave. Use cscope and search for the string request_interrupt_window in the source code; you will see how KVM decides when to enter the guest to inject an interrupt. Also go through the api.txt file; it is very helpful.
Cheers
EDITED
Here's one example of the host injecting interrupts into the guest. Assume there was a page fault in the guest VM:
- This causes a VMEXIT.
- The hypervisor/KVM handles the VMEXIT: it sees the reason for the VMEXIT through the VMCS control structure and finds that there was a page fault.
- The host/KVM is responsible for memory virtualization, so it checks whether the page fault occurred because the page was not allocated to the guest, in which case it calls alloc_page in the host kernel and does a VMENTRY to resume guest execution.
- Or the mapping was removed by the guest OS; in this case KVM uses the VMCS control structure as a communication medium to inject virtual interrupt number 14, which causes the guest kernel to handle the page fault.
This is one example of the host injecting a virtual interrupt. Of course, there are plenty of other ways/reasons to do so.
You can in fact configure the VMCS to make the guest do a VMEXIT after executing every single instruction; this is done using the Monitor Trap Flag.
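The page-fault decision in the steps above can be sketched as a toy model (illustrative names, not KVM source; only the #PF vector number 14 is real x86):

```c
/* Toy model of the VMEXIT page-fault decision described above. On a
 * guest page fault the host either backs the page itself (no injection)
 * or reflects a virtual #PF (vector 14) into the guest. */
#include <stdbool.h>

#define PF_VECTOR 14   /* x86 page-fault exception vector */

enum fault_cause {
    HOST_NOT_ALLOCATED,   /* page never given to the guest by the host */
    GUEST_UNMAPPED        /* guest OS itself removed the mapping */
};

static int  injected_vector = -1;   /* -1 means nothing injected */
static bool host_allocated;         /* did the host call alloc_page? */

static void handle_guest_page_fault(enum fault_cause cause)
{
    if (cause == HOST_NOT_ALLOCATED)
        host_allocated = true;        /* alloc_page in the host, no injection */
    else
        injected_vector = PF_VECTOR;  /* inject virtual #PF into the guest */
    /* ...VMENTRY to resume guest execution either way... */
}
```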
I guess you are referring to assigned-device interrupts (and not emulated interrupts or virtio interrupts, which are not directly forwarded from the physical device to the guest).
For each IRQ of the assigned device, request_threaded_irq is called and registers kvm_assigned_dev_thread to be called on every interrupt. As you can see, kvm_set_irq is then called, and as described, the only coalescing happens when the interrupt is masked. In x86, interrupts can be masked by rflags.IF, by a mov-SS shadow, by a TPR value that does not allow the interrupt to be delivered, or by a higher-priority interrupt already in service. KVM is bound to follow the architecture definition in order not to surprise the guest.
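The maskability rules listed here can be sketched roughly as follows (simplified; the vector-to-priority-class mapping is vector >> 4 on x86, and KVM's real checks are spread across its interrupt-window logic, so take this as a model, not the implementation):

```c
/* Simplified model of x86 interrupt deliverability: an interrupt can be
 * injected only if IF is set, there is no mov-SS/sti shadow, its
 * priority class beats the TPR, and no equal-or-higher-priority
 * interrupt is already in service. */
#include <stdbool.h>

struct vcpu_state {
    bool     rflags_if;      /* interrupts enabled (rflags.IF)? */
    bool     mov_ss_shadow;  /* blocked by mov-SS/sti interrupt shadow */
    unsigned tpr;            /* task-priority register (priority class) */
    unsigned in_service;     /* class of interrupt in service, 0 = none */
};

static bool can_inject(const struct vcpu_state *v, unsigned vector)
{
    unsigned prio = vector >> 4;                     /* x86 priority class */
    if (!v->rflags_if || v->mov_ss_shadow)
        return false;                                /* globally blocked */
    if (prio <= v->tpr)
        return false;                                /* masked by TPR */
    if (v->in_service && prio <= v->in_service)
        return false;                                /* higher-prio in service */
    return true;
}
```

When can_inject is false, KVM must hold the interrupt pending and request an interrupt window, injecting it only once the guest becomes interruptible; that is the coalescing point mentioned above.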