In the AM5K2E02 System-on-Chip there is an MMU (memory management unit) in the core, and also an MPU (memory protection unit) as a separate peripheral device.
On other pages the MMU is described as a superset of the MPU, i.e. the MMU does everything the MPU does and more, but what would happen if both an MMU and an MPU are active?
The reason for asking is that we have the MMU configured and set up, but there is also an MPU running with its default settings. Who gets priority when deciding whether such-and-such process or user has access to so-and-so memory region? I can't find a clear answer, and in fact some pages seem to suggest that a SoC will have either an MPU or an MMU but not both, which isn't true here, as the AM5K2E02 has both.
Thanks all :) and Merry Christmas soon!
Can software interrupts do some of what hardware interrupts do?
Can the system detect things like a power failure and then rely only on software interrupts?
So then we won't need special hardware like interrupt controllers?
While this may be technically possible I doubt you'll end up with a system that's stable or even reliable. Interrupts are especially important as hardware because they, well, interrupt the processing of other tasks asynchronously. This allows physical components at their lowest level to quickly and correctly respond to events.
Let's play out the scenario you mention and imagine a component on the motherboard detecting a power failure. Without an interrupt, the best it can do is write to a register or cache. It must then rely on another piece of hardware or even the operating system to check that value. This basically means periodic polling, which is not as efficient. Furthermore, if there is currently a long-running batch of instructions hogging the resources necessary to check the value, you have no deterministic way of knowing when that check might occur. It could be near-instant, or it could be a second from now. If it's the latter, your computer loses power and shuts down before it can react.
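To make the contrast concrete, here is a minimal C sketch. The POWER_STATUS register, its bit layout, and the handler name are assumptions for illustration; a real power-monitor circuit defines its own register address and interrupt line:

```c
#include <stdint.h>

/* POWER_STATUS stands in for a hypothetical memory-mapped status register;
 * on real hardware it would be a fixed address from the chip's datasheet. */
static volatile uint32_t POWER_STATUS;   /* assume bit 0 set = supply failing */

/* Polling approach: the CPU has to get around to running this loop in time. */
void poll_for_power_failure(void)
{
    for (;;) {
        if (POWER_STATUS & 0x1u) {
            /* save state, shut down cleanly, ... */
            break;
        }
        /* everything else the system does competes with this check */
    }
}

/* Interrupt approach: hardware forces this handler to run the moment the
 * power-fail line asserts, no matter what the CPU was doing.              */
void power_fail_isr(void)
{
    /* save state, shut down cleanly, ... */
}
```

The polling loop only reacts when it next gets scheduled and reaches the check; the interrupt handler runs as soon as the hardware raises the line, which is the point of the answer above.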
I have been searching around [in vain] for some good links/sources to help understand GPIOs and why they are used in embedded systems. Can anyone please point me to some?
In any useful system, the CPU has to have some way to interact with the outside world - be it lights or sounds presented to the user or electrical signals used to communicate with other parts of the system. A GPIO (general purpose input/output) pin lets you either get input for your program from outside the CPU or to provide output to the user.
Some uses for GPIOs as inputs:
detect button presses
receive interrupt requests from external devices
Some uses for GPIOs as outputs:
blink an LED
sound a buzzer
control power for external devices
A good case for a bidirectional GPIO or a set of GPIOs can be to "bit-bang" a protocol that your SoC doesn't provide natively. You could roll your own SPI or I2C interface, for example.
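As a rough illustration of bit-banging, here is what an SPI-style (mode 0) byte transfer over three GPIOs might look like. The pin numbers and the gpio_write()/gpio_read()/delay_us() helpers are assumptions standing in for whatever GPIO driver the target actually provides:

```c
#include <stdint.h>

#define PIN_SCK  1   /* clock (output); pin numbers are placeholders */
#define PIN_MOSI 2   /* data out (output) */
#define PIN_MISO 3   /* data in  (input)  */

/* Hypothetical wrappers around the target's GPIO driver and a busy-wait timer. */
extern void gpio_write(int pin, int level);
extern int  gpio_read(int pin);
extern void delay_us(unsigned us);

uint8_t spi_transfer_byte(uint8_t out)
{
    uint8_t in = 0;

    for (int bit = 7; bit >= 0; bit--) {
        gpio_write(PIN_MOSI, (out >> bit) & 1); /* present next bit, MSB first      */
        delay_us(1);
        gpio_write(PIN_SCK, 1);                 /* clock high: slave samples MOSI   */
        in = (uint8_t)((in << 1) | (gpio_read(PIN_MISO) & 1));
        delay_us(1);
        gpio_write(PIN_SCK, 0);                 /* clock low: slave shifts next bit */
    }
    return in;
}
```

The same idea extends to I2C, a proprietary shift-register chain, or anything else the SoC's fixed peripherals don't cover.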
The reason you cannot find an answer is probably because if you know what an embedded system is and does, or indeed anything about digital electronic systems, then the answer is rather too obvious to write down! That is to say that if you get as far as actually implementing a working embedded system, you should already know what they are.
GPIO pins are, as a minimum, two-state digital logic I/O. In most cases some or all of them may also be interrupt sources. These interrupts may have options for rising, falling, dual-edge, or level triggering.
On some targets GPIO pins may have configurable output circuitry to allow, for example, external pull-ups to be omitted, or to allow connection to devices that require open-collector outputs, and in some cases even to provide filtering of high frequency noise and glitches.
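For illustration, a typical configuration sequence might look like the sketch below. The base address, register names, and bit meanings are invented to show the usual pattern (direction, pull-up, edge select); a real part's reference manual defines the actual ones:

```c
#include <stdint.h>

/* All addresses and register names here are placeholders for illustration. */
#define GPIO_BASE     0x40020000u
#define GPIO_DIR      (*(volatile uint32_t *)(GPIO_BASE + 0x00)) /* 1 = output       */
#define GPIO_PULLUP   (*(volatile uint32_t *)(GPIO_BASE + 0x04)) /* 1 = pull-up on   */
#define GPIO_FALL_IE  (*(volatile uint32_t *)(GPIO_BASE + 0x08)) /* falling-edge irq */

void configure_button_pin(unsigned pin)
{
    GPIO_DIR     &= ~(1u << pin);   /* input                                        */
    GPIO_PULLUP  |=  (1u << pin);   /* internal pull-up, so no external resistor    */
    GPIO_FALL_IE |=  (1u << pin);   /* interrupt when the button pulls the line low */
}
```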
In most embedded systems, a processor will be ultimately responsible for sensing the state of various devices which translate external stimuli to digital-level logic voltages (e.g. when a button is pushed, a pin will go low; otherwise it will sit high), and controlling devices which translate logic-level voltages directly into action (e.g. when a pin is high, a light will go on; when low, it will go off). It used to be that processors did not have general-purpose I/O, but would instead have to use a shared bus to communicate with devices that could process I/O requests and set or report the state of the external circuits. Although this approach was not entirely without advantages (one processor could monitor or control thousands of circuits on a shared bus), it was inconvenient in many real-world applications.
While it is possible for a processor to control any number of inputs and outputs using a four-wire SPI bus or even a two-wire I2C bus, in many cases the number of signals a processor will need to monitor or control is sufficiently small that it's easier to simply include the circuitry to monitor or control some signals directly on the chip itself. Although dedicated interfacing hardware will frequently have output-only or input-only pins (the person choosing the hardware interface chips will know how many signals need to be monitored, and how many need to be controlled), a particular family of processor may be used in some applications that require e.g. 4 inputs and 28 outputs, and other applications that require 28 inputs and 4 outputs. Instead of requiring that different parts be used in applications with different balances between inputs and outputs, it's simpler to just have one part with pins that can be configured as inputs or outputs, as needed.
I think you have it backwards. GPIO is the default in electronics. It's a pin, a signal, that can be programmed. Everything is made up of these. For a processor, dedicated peripherals are a special case, they're extras for when you know you want a more limited function.
From a chip manufacturer's perspective, you often don't know exactly what the user needs, so you can't put the exact peripherals on your chip. You make generic ones instead. Many applications are so rare that there's no market for a specific chip. The only thing you can do is use GPIO or make specific hardware yourself. Also, all (unused or potentially unused) pins are worth turning into GPIO because that makes the part even more generic and reusable. Generic and reusable is very nearly the whole point of programmable chips; otherwise you would just make ASICs.
Some particularly suitable applications:
Reset parts (chips) in a system
Interface to switches, keypads, lights (all they have is one pin/signal!)
Controlling loads with relays or semiconductor switches (on-off)
Solenoid, motor, heater, valve...
Get interrupts from single signals
Thermostats, limit switches, level detectors, alarm devices...
BTW, the Parallax Propeller has practically nothing but GPIO pins. Peripherals are made in software. It works very well for many uses.
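To show what "peripherals made in software" can look like, here is a hedged sketch of a software UART transmitter driven entirely from one GPIO pin. The pin number, the baud-rate timing, and the gpio_write()/delay_us() helpers are assumptions:

```c
#include <stdint.h>

#define PIN_TX  4                /* placeholder pin number                        */
#define BIT_US  104              /* ~9600 baud: 1e6 / 9600 is roughly 104 us/bit  */

/* Hypothetical wrappers around the target's GPIO driver and a busy-wait timer. */
extern void gpio_write(int pin, int level);
extern void delay_us(unsigned us);

void soft_uart_putc(uint8_t c)
{
    gpio_write(PIN_TX, 0);                    /* start bit              */
    delay_us(BIT_US);
    for (int i = 0; i < 8; i++) {             /* 8 data bits, LSB first */
        gpio_write(PIN_TX, (c >> i) & 1);
        delay_us(BIT_US);
    }
    gpio_write(PIN_TX, 1);                    /* stop bit               */
    delay_us(BIT_US);
}
```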
I basically wanted to know what exactly a virtual processor is. At IBM's site they define it as:
"A virtual processor is a representation of a physical processor core to the operating system of a logical partition that uses shared processors. "
I understand that if there are x processors, each of which can simultaneously perform two operations, then the system can perform 2x operations simultaneously. But where does a virtual processor fit into this? And I tried looking up the difference between a logical partition and other partitions, such as a primary partition, but wasn't really sure.
I'd like to draw an analogy between virtual memory and virtual processors.
Start with expectations:
A user program is written against a set of expectations about what the memory looks like (and a nice flat, large, contiguous memory model is best...)
An OS is written against a set of expectations of how the hardware performs (what CPU protection modes and operations are available, how interrupts arrive and are blocked and handled, how to talk to I/O devices, etc...)
Realize that these expectations can be met directly by the hardware, or by an abstraction layer
Virtual memory is a set of (specialized, not found in simple chips) hardware tools and OS services that fool a user program into thinking that it has that nice, flat, large, contiguous memory space, even while the OS is busily dividing the real memory into little pieces, storing some of them on disk, bringing others back, and otherwise making a real hash of it. But your code doesn't care. Everything just works.
A virtual processor system is a set of (specialized, not found in consumer CPUs) hardware tools and hypervisor services that allow your OS to believe it has direct access to one or more processors with the expected protection modes, interrupts, etc. even though the hypervisor is busily swapping whole OS contexts onto and off of one or more real processors, starting and stopping access to IO busses, and so on and so forth. But the OS doesn't care. Everything just works.
The hardware support to do this has only recently started to be available in "desktop" CPUs, but Big Iron has had it for ages. It is useful for a couple of reasons:
Protection. In a properly protected OS, it is tough for one process or user to spy on another. But since they can be resident in the same context, it may still be possible. Virtualizing the OSs separates them by another, even thinner, channel and makes it that much harder for data to leak and for malicious things to be done.
Robustness. If you can swap OS contexts in and out, you can migrate them from one machine to another, and checkpoint and restart them. This allows for computers that detect failures on their own processors and recover gracefully.
These are the things (aside from millions of LOC of heavily debugged, mission critical code) that have kept people paying for Big Iron.
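As a small aside on the "recently started to be available in desktop CPUs" point: on a Linux/x86 machine you can check for the relevant extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. The sketch below assumes that environment; other platforms report this differently.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[4096];
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) { perror("cpuinfo"); return 1; }

    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "flags", 5) == 0) {          /* CPU feature flags line */
            int has = strstr(line, " vmx") || strstr(line, " svm");
            printf("hardware virtualization extensions: %s\n", has ? "yes" : "no");
            break;
        }
    }
    fclose(f);
    return 0;
}
```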
Disclaimer: I am (mostly) hardware ignorant. This is probably my problem. However, I find it hard to accept that it is not possible to debug hardware, so I just wanted to get some second opinions.
We have an issue where certain actions (swapping USB devices in and out at run-time) can blow either the USB hub or the chip on our USB board (it's custom hardware). It's a fuzzy problem (it appears that the degree of "blownness" can vary a bit), and it manifests itself intermittently with various symptoms that are very difficult to reliably reproduce (typically random corruption of packets).
This results in difficulty in ascertaining whether a newly reported problem is due to this hardware fault or is actually a bug in the software. We have since implemented protection on these devices, but if an unprotected device is used with a protected device, there is a possibility of it then tainting the (now protected) device. One of the ports is also not protected, meaning that someone could still "kill off" a unit that should be safe by accidentally using the wrong port.
The upshot of this is that it is impossible to tell which of our devices suffer this issue without completely replacing ALL the hardware (we've bitten the bullet for most of our production hardware but there is still a lot of dev and QA hardware out there with this issue).
I would imagine that, given a piece of hardware, it could be possible to use some kind of hardware diagnostics tool to determine whether the kit is faulty or not. Am I living in a dream world? My hardware department tells me that the only tests that can prove the fault would be software tests... but as I have stated, the symptoms are very difficult to reproduce. As I'm not that experienced with hardware, I don't know if this is the only answer or not. I therefore ask the world.
Built In Test Equipment is used for performing a Built In Test
BITE for BIT
(No bytes involved.)
It is completely, utterly normal for military/aerospace equipment to have extra hardware to test itself with.
The original IBM PC had a surprising quantity of test hardware built in.
In the case of your equipment, a test device and some statistical analysis would do the trick.
This could be done in hardware, in a dongle, but frankly it would be easier to do with some software.
Use two back-to-back USB to RS232 serial converters to make a USB loopback device.
Send lots of data, checksum the packets, and measure error rates.
I'm assuming your errors occur on the in->out side as well as the out->in side.
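A rough sketch of such a test program is below, assuming a Linux host and that the loopback shows up as /dev/ttyUSB0. The device path, baud rate, packet size, and packet count are all assumptions to adjust for your setup, and error handling is kept minimal:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

#define PKT_LEN 64

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);   /* assumed device path */
    if (fd < 0) { perror("open"); return 1; }

    struct termios t;
    tcgetattr(fd, &t);
    cfmakeraw(&t);
    cfsetispeed(&t, B115200);
    cfsetospeed(&t, B115200);
    t.c_cc[VMIN]  = PKT_LEN;         /* block until a whole packet arrives... */
    t.c_cc[VTIME] = 20;              /* ...or 2 s of silence between bytes    */
    tcsetattr(fd, TCSANOW, &t);

    unsigned long sent = 0, bad = 0;
    unsigned char tx[PKT_LEN], rx[PKT_LEN];

    for (int p = 0; p < 10000; p++) {
        unsigned char sum = 0;
        for (int i = 0; i < PKT_LEN - 1; i++) { tx[i] = (unsigned char)rand(); sum += tx[i]; }
        tx[PKT_LEN - 1] = sum;                    /* last byte = simple checksum */

        write(fd, tx, PKT_LEN);
        ssize_t n = read(fd, rx, PKT_LEN);
        sent++;
        if (n != PKT_LEN || memcmp(tx, rx, PKT_LEN) != 0)
            bad++;                                /* short read or corrupted data */
    }
    printf("%lu packets, %lu bad (%.3f%%)\n", sent, bad, 100.0 * bad / sent);
    close(fd);
    return 0;
}
```

Run it on a known-good unit and on suspect units; a statistically higher error rate on a suspect unit is the evidence you are after.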
Really, your hardware guys need to look at some application notes; USB IS hotplug-safe IF done according to the book.
There is a cool example out on the net of opto-coupling a USB chip's connection to the board it's on to prevent this sort of thing. The USB chip is connected to the host, powered from the host, and the interface to the USB chip is SPI, which is opto-coupled back to the rest of the board.
As for your units, the chips are failing partially. Injured devices may work fine for months and then die. An electrostatic discharge ("a static zap") can do the same thing that you describe. A device can be injured by shocks too small for you to feel.
The wires and features in semiconductors are microscopic, and easily damaged by stray electricity.
If the hardware design is mostly right, the likely cause of the problems you've been experiencing is ESD when the devices are handled to plug/unplug them. Your device has its own power supply, and its ground voltage floats relative to the other end of the USB cable until it is connected.
Hope this helps.
No, it's not.
A lot of hardware manufacturers begin with hardware testing. Inputs and outputs (I/O) are just a matter of evaluating where circuit flow is going. Consider the abstraction that both software and hardware deal in boolean operations.
Hardware is just a little less human readable!
When it comes down to it, hardware's line of communication is (at its most basic) HIGH and LOW through various pins.
I have a brother (in the automobile tech industry) who has used an electrometer to measure voltage on pins to isolate where the problem is (I'm not really smart enough in that field to go into more detail on how he does it).
Your problem is that the only known symptom is so hard to detect (packet corruption in USB stream), that you're going to need software (at some level) to detect it.
If you can work out why packets are getting corrupted (bad voltages?) then maybe you could detect that with hardware?
Otherwise you need some kind of robust testing kit, and software to send/receive lots of packets to look for corruption?
No. That's what oscilloscopes and logic analyzers are for. Also there is more specialized equipment such as USB testers.
The simpler the hardware is, and the more access you have to the signals, the more likely you are to be able to diagnose it in a 'purely hardware' kind of way. For example if you had a simple parallel port card plugged into a PCI slot, it would be relatively straightforward to put a bus analyzer on the PCI bus, and the adapter's output, and see if the outputs did the right thing when the card was addressed. But note you'd still need to attempt to access that card from the PCI bus, which would mean either (A) some kind of PCI bus simulation, which would be one heck of a big pile of test hardware, or (B) a cheap off-the-shelf PC with a few lines of test code.
But then at the other end of the spectrum, suppose you're dealing with a large FPGA. You can get one heck of a lot of logic into an FPGA, and you won't necessarily have access to all the test points you'd like. I've personally encountered a bug with a serial port embedded in an FPGA, where a race condition with the shift register preload register would occasionally corrupt a byte. Hypothetically the VHDL could have been reworked to bring out test points, and a pile of scopes and analyzers gathered, but from a management standpoint it was much more cost effective to try to tease the problem out with software. Under normal usage, the bug in question would have turned up once in a blue moon. We iterated through speculating about the conditions that would elicit the bug and refining the test code, until we had test software that could reproduce the bug 2-3 times a minute. At that point we could actually provide clues to the VHDL guys that helped them fix the problem quickly.
Long story short, inside of a week a hardware bug was smoked out via software, whereas starting with the same information and going 'hardware only' would likely not have been any faster, and would have required a lot of expensive test equipment. So, yeah, you probably can do it without software, but as usual it's a trade-off, and you have to find the right balance point between the amount of software vs hardware for the job.
I have an idea of how interrupts are handled by a dual core CPU. I was wondering about how interrupt handling is implemented on a board with more than one physical processor.
Is any of the interrupt responsibility determined by the physical board's configuration? Each processor must be able to handle some types of interrupts, like disk I/O. Unless there is some circuitry to manage and dispatch interrupts to the appropriate processor? My guess is that the scheme must be processor-neutral, so that any processor or core can run the interrupt handler.
If a core is waiting on a disk read, will that core be the one to run the interrupt handler when the disk is ready?
On x86 systems each CPU gets its own local APIC (Advanced Programmable Interrupt Controller); the local APICs are wired to each other and to an I/O APIC that handles routing device interrupts to the local APICs.
The OS can program the APICs to determine which interrupts get routed to which CPUs (or to let the APICs make that decision).
I imagine that a multi-core CPU would have a local APIC for each core, but I'm honestly not certain about that.
See these links for more details:
http://osdev.berlios.de/pic.html
http://www.microsoft.com/whdc/archive/io-apic.mspx
http://en.wikipedia.org/wiki/Intel_APIC_Architecture
What you're interested in is SMP processor affinity. Here is an excellent article about how it is handled in Linux. The Advanced Programmable Interrupt Controller (APIC) is how you manage this in a modern system. Basically, the default would be for everything to go to processor 0 unless you had an OS that used this interface to set things up properly. Also, you don't necessarily want the core that issued a command to be the one that waits on a particular interrupt; you want the less loaded cores to receive it.
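On Linux this routing decision is exposed through /proc/irq/<n>/smp_affinity, a hexadecimal CPU mask you can inspect or (as root) rewrite. The IRQ number in the sketch below is only a placeholder; check /proc/interrupts for a real one on your machine:

```c
#include <stdio.h>

int main(void)
{
    /* IRQ 19 is an arbitrary example; pick a real IRQ from /proc/interrupts. */
    const char *path = "/proc/irq/19/smp_affinity";
    char mask[64];

    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }
    if (fgets(mask, sizeof mask, f))
        printf("IRQ 19 currently allowed on CPU mask %s", mask);
    fclose(f);

    /* To pin the interrupt to CPU 0 only (root required):
     *   echo 1 > /proc/irq/19/smp_affinity                */
    return 0;
}
```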
I already asked this question a while back. Maybe it can offer you some insight :)
how do interrupts in multicore/multicpu machines work
I would say that it would depend on the hardware manufacturer...
However, this link makes me believe most are probably handled by the primary processor and/or first core.
Another link