I want to know how a single IRQ line is shared among multiple devices. I mean, how are they physically connected at the hardware level? Do they use multiple APIC controllers for this, or what other methods are used?
The most basic way to connect multiple devices to a single interrupt request line, so that every device can activate a request, is to use open-collector (wired-OR) outputs: each device can pull the shared line low, and a single pull-up resistor holds it high when no device is requesting.
When the request is granted, the acknowledge signal may be forwarded from device to device using a daisy chain.
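On the software side, every driver sharing such a wired-OR line has to check whether its own device actually asserted the interrupt. Below is a minimal sketch in the style of a Linux shared-IRQ handler; the register offset, status bit, and device names are hypothetical, not taken from any real driver.

```c
/* Sketch of a shared-IRQ handler, Linux kernel-module style: every driver
 * on the line registers with IRQF_SHARED and must report IRQ_NONE when its
 * device did not raise the interrupt, so the next handler gets a chance. */
#include <linux/interrupt.h>
#include <linux/io.h>

struct mydev {
    void __iomem *regs;
};

#define REG_STATUS 0x00          /* hypothetical status register offset */
#define STATUS_IRQ 0x01          /* hypothetical "I raised the IRQ" bit  */

static irqreturn_t mydev_isr(int irq, void *dev_id)
{
    struct mydev *dev = dev_id;
    u32 status = readl(dev->regs + REG_STATUS);

    if (!(status & STATUS_IRQ))
        return IRQ_NONE;         /* not us: let the next handler check */

    writel(STATUS_IRQ, dev->regs + REG_STATUS);  /* ack, releasing the line */
    /* ... handle the event ... */
    return IRQ_HANDLED;
}

/* During probe, every device sharing the line registers the same way: */
static int mydev_register_irq(struct mydev *dev, int irq)
{
    return request_irq(irq, mydev_isr, IRQF_SHARED, "mydev", dev);
}
```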
I'm facing issues when communicating with devices through a USB hub. When enumerating devices directly on the host port it works, but some devices behind a USB hub have issues.
Setup: STM32F103C8 - MAX3421E - LUFA (USB stack, ported to the MAX3421E (host) and STM32F103C8T6 (device)) - USB Full-Speed setup
Scenario:
When I attach a device directly to the host, I don't experience any issues enumerating almost all devices (some devices seem to be faulty and have weird/nonstandard behavior). But when I try to enumerate through a USB hub, devices start to behave very strangely. I receive many more NAKs from devices than when they are connected directly to the host. Some devices are able to return the Device Descriptor, but retrieving the Configuration Descriptor fails. Some devices return a Toggle Error after several NAKs; so far this could be remedied by delaying the retried IN token. The behavior also differs between hubs: one device has no problems when connected to HUB1, but has issues when connected to HUB2. Then I have HUB3 (7-port), which internally acts as a hub-within-a-hub. On HUB3 the device works fine on a port behind the secondary internal hub, but not on the primary ports exposed by the "root" hub.
I suspect that the hub's Transaction Translator (TT) could somehow be interfering with the USB communication, but according to the information I have found, the TT should not be active in a Full-Speed setup.
I have checked (many times) that I'm setting the correct device address assigned during the SetAddress phase (which is proven by the device returning its Device Descriptor). When I step through in the debugger it seems that I can also get the Configuration Descriptor, but in a normal run it isn't retrieved, and only when going through a hub.
Does anyone have any ideas what to look for? I've run out of ideas after a week of trying to find the root cause.
Thanks
so...
- as usual, after days of searching for the root cause, the solution comes naturally right after asking somewhere (this always happens to me, but I do always try before asking)
- when using hubs, make sure you don't suspend SOF generation during control transfers. LUFA just resumes the bus inside its control transfer routines, so make sure you don't stop and re-enable SOF within them (my fault, as I'm using a version ported to the MAX3421E)
- if you have a tight main loop, make sure you don't reinitialize a USB transfer before the previous attempt has completed, and if you do, check that you don't overwrite data which hasn't been fully processed yet (especially when using interrupt-driven transfer-complete processing); see the sketch below. [Things seem to work when you have quite a lot of debug output, as it delays those time-critical transfers.]
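As a rough illustration of that last point, here is a minimal sketch of guarding a tight main loop with a busy flag that only the transfer-complete interrupt clears. The buffer, the HAL call, and the ISR hook are hypothetical placeholders, not LUFA or MAX3421E API names.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical HAL hook, not a LUFA/MAX3421E name: starts one IN transfer */
extern void start_in_transfer(uint8_t *buf, uint16_t len);

static volatile bool transfer_busy;     /* set while a transfer is in flight */
static uint8_t xfer_buf[64];            /* owned by the hardware while busy  */

/* Assumed to be called from the transfer-complete interrupt */
void transfer_complete_isr(void)
{
    /* copy/consume xfer_buf here, BEFORE clearing the flag, so the main
       loop can never overwrite data that hasn't been processed yet */
    transfer_busy = false;
}

void main_loop_step(void)
{
    if (!transfer_busy) {               /* never re-arm a pending transfer */
        transfer_busy = true;
        start_in_transfer(xfer_buf, sizeof xfer_buf);
    }
    /* ... other non-blocking work ... */
}
```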
The enumeration-over-hub issues are now gone. The remaining small glitches are subject to tweaking.
Unfortunately, since I suspected electrical issues, I had unsoldered the USB host shield and soldered on another one, which in light of the new information seems to have been unnecessary. Never mind, I have trained my soldering skills.
I have an embedded system that controls a motor using PWM, among other things. I send commands through a serial connection, which is connected to a Fastrack Wavecom Supreme GSM module. However, the module connected to the embedded system (the client) fails to send the message to the server module.
I have been able to send messages back and forth between the two Wavecom modules; however, when I try to send from my PIC18F45K22 to the Wavecom module, it fails.
Any ideas of what could be going wrong?
You did not specify what type of serial communication you are using. For instance, if you are using the PIC's SPI module you may be sampling on the wrong edge of the clock; there are at least 2 SPI modes in wide use and 4 altogether. If you are using the PIC's UART, there is a whole bucketful of settings that may be off: speed, number of bits, in-band signaling, out-of-band signaling, parity, etc.
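For the UART case, the usual first step is to pin both ends to one known configuration such as 9600 8N1. Below is a minimal sketch for EUSART1 on the PIC18F45K22, assuming Fosc = 16 MHz and Microchip XC8 register names; verify every value against the datasheet and make sure it matches the Wavecom module's configured settings exactly.

```c
#include <xc.h>

void uart1_init_9600_8n1(void)
{
    ANSELCbits.ANSC7 = 0;        /* RC7/RX1 as digital */
    ANSELCbits.ANSC6 = 0;        /* RC6/TX1 as digital */
    TRISCbits.TRISC7 = 1;        /* datasheet recommends both TRIS bits set; */
    TRISCbits.TRISC6 = 1;        /* the EUSART takes over the pins           */

    TXSTA1bits.BRGH    = 1;      /* high-speed baud rate generator */
    BAUDCON1bits.BRG16 = 0;      /* 8-bit baud counter             */
    SPBRG1 = 103;                /* 16 MHz/(16*(103+1)) ~ 9615 baud */

    TXSTA1bits.SYNC = 0;         /* asynchronous mode (UART, not synchronous) */
    RCSTA1bits.SPEN = 1;         /* enable the serial port  */
    TXSTA1bits.TXEN = 1;         /* enable the transmitter  */
    RCSTA1bits.CREN = 1;         /* enable the receiver     */
}

void uart1_putc(char c)
{
    while (!PIR1bits.TX1IF)      /* wait until TXREG1 can accept a byte */
        ;
    TXREG1 = c;
}
```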
Since we plan to use MTP (Media Transfer Protocol) for our next device, we are evaluating MTP as a replacement for the current (unstable) USB drivers in the currently released device.
The limitation on this device is that its processor (StrongARM) supports only up to 3 endpoints:
"Serial port 0 is a universal serial bus device controller (UDC) that supports three endpoints and can operate half-duplex at a baud rate of 12 Mbps (slave only, not a host or hub controller)."
But according to the specification, MTP needs at least 4 endpoints (from the PTP spec):
"The device shall contain at least four endpoints: default, Data-In, Data-Out, and an Interrupt endpoint."
Now the question: Can we just skip the interrupt endpoint on the device? I know that it violates the specification - but what happens if we do?
From our current evaluation software I can see the following scenarios:
- The 'space available' figure is not updated: the user will see 100 MB of free memory, but placing a 1 MB file gives the error "Not Enough Memory"
- Actions not driven by the host are not visible on the host (so when files are deleted, created, or moved on the device, the connected host does not know about it)
If we can live with it, is it advisable to implement it this way?
UPDATE: Damn... when I tested it last time, I had only removed the code for the interrupt-EP data transmission. Now I have also removed the endpoint definition (I no longer create the endpoint), and from this point on the MTP connection couldn't be established any more :(
It seems that the Windows driver (WPD) requires the interrupt endpoint - even if it's never used. Bad luck...
Does anyone have an idea whether, and how, MTP can be made to work with 3 endpoints?
Finally I got an answer from Microsoft:
The 3-endpoint setup is not supported.
The interrupt endpoint is required so that the driver can receive MTP events from the device. These events are a notification mechanism that the driver relies on to relay events to applications (e.g. when an object is created, updated, or removed).
If your device does nothing with the endpoint (i.e. sends no events), applications such as Explorer will not behave correctly whenever objects on your device are changed.
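For reference, this is roughly what the interrupt IN endpoint looks like in the configuration descriptor that the WPD driver checks for, in the standard USB 2.0 layout. The endpoint address, packet size, and polling interval here are illustrative values, not taken from any particular device.

```c
#include <stdint.h>

/* Standard USB 2.0 endpoint descriptor layout (packed: wMaxPacketSize
 * sits at an odd offset, so the compiler must not insert padding). */
struct usb_endpoint_descriptor {
    uint8_t  bLength;            /* 7 */
    uint8_t  bDescriptorType;    /* 0x05 = ENDPOINT */
    uint8_t  bEndpointAddress;   /* bit 7 set = IN direction */
    uint8_t  bmAttributes;       /* 0x03 = interrupt transfer type */
    uint16_t wMaxPacketSize;
    uint8_t  bInterval;          /* polling interval in frames (full speed) */
} __attribute__((packed));

/* The event endpoint an MTP/PTP interface declares, even if the device
 * never actually sends an event on it: */
static const struct usb_endpoint_descriptor mtp_event_ep = {
    .bLength          = 7,
    .bDescriptorType  = 0x05,
    .bEndpointAddress = 0x83,    /* EP3, IN (illustrative) */
    .bmAttributes     = 0x03,    /* interrupt */
    .wMaxPacketSize   = 28,      /* enough for a PTP event block (assumed) */
    .bInterval        = 10,
};
```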
So we buried our plans... :(
On Red Hat Linux, I have a multicast listener listening to a very busy multicast data source. It runs perfectly by itself, with no packet loss. However, once I start a second instance of the same application with exactly the same settings (same src/dst IP address, socket buffer size, user buffer size, etc.), I start to see very frequent packet loss from both instances, and they lose exactly the same packets. If I stop one of the instances, the remaining one returns to normal without any packet loss.
Initially, I thought it was a CPU/kernel load issue; maybe the application could not pull packets out of the buffer quickly enough. So I ran another test: I kept one instance of the application running, but then started a totally different multicast listener on the same computer, using the second NIC and listening to a different, even busier multicast source. Both applications ran fine without any packet loss.
So it looks like one NIC is not powerful enough to support two multicast applications, even when they listen to exactly the same stream. A possible cause of the packet loss might be that, in this scenario, the NIC driver needs to copy the incoming data into two socket buffers, and this extra copying is too much for it to handle, so it drops packets. Any deeper analysis of this issue, or possible solutions?
Thanks
You are basically finding out that the kernel is inefficient at fan-out of multicast packets. In the worst-case scenario, for every incoming packet the code allocates two new buffers (the SKB object plus the packet payload) and copies the NIC buffer twice.
Take the best-case scenario: for every incoming packet a new SKB is allocated, but the packet payload is shared between the two sockets with reference counting. Now imagine what happens with two applications, each on its own core and with its own socket. Every reference to the packet payload causes the memory bus to stall while both cores' caches flush and reload, and on top of that each application has to context-switch into the kernel and back to receive the socket payload. The result is terrible performance.
You aren't the first to encounter this problem, and many vendors have created solutions for it. The basic design is to limit the incoming data to one thread on one core reading one socket, and then have that thread distribute the data to all other interested threads, preferably using user-space code built on shared memory and lock-free data structures.
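A minimal sketch of that pattern, under the assumption of one receiver thread owning the only socket that joins the group, fanning datagrams out through a single-producer ring buffer (group address, port, and sizes are made up; a real implementation would add overrun handling and back-off instead of spinning):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdatomic.h>
#include <sys/socket.h>

#define RING_SLOTS 1024          /* power of two, one datagram per slot */
#define SLOT_BYTES 2048

static char ring[RING_SLOTS][SLOT_BYTES];
static int  ring_len[RING_SLOTS];
static _Atomic unsigned head;    /* written only by the receiver thread */

static void *receiver(void *arg)
{
    int fd = *(int *)arg;
    unsigned h = 0;              /* local count; only this thread writes head */
    for (;;) {
        ssize_t n = recv(fd, ring[h % RING_SLOTS], SLOT_BYTES, 0);
        if (n <= 0)
            continue;
        ring_len[h % RING_SLOTS] = (int)n;
        h++;
        /* publish: consumers must see the payload before the new head */
        atomic_store_explicit(&head, h, memory_order_release);
    }
    return 0;
}

static void *consumer(void *arg)
{
    unsigned tail = 0;           /* each consumer sees every datagram */
    for (;;) {
        while (tail == atomic_load_explicit(&head, memory_order_acquire))
            ;                    /* spin; real code would back off      */
        /* process ring[tail % RING_SLOTS], ring_len[...] here */
        tail++;                  /* NOTE: no overrun protection in sketch */
    }
    return 0;
}

int main(void)
{
    /* one socket joins the group; addresses/port are assumptions */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(5000);
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&sa, sizeof sa);

    struct ip_mreq mreq = {0};
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    pthread_t rx, c1, c2;
    pthread_create(&rx, 0, receiver, &fd);
    pthread_create(&c1, 0, consumer, 0);
    pthread_create(&c2, 0, consumer, 0);
    pthread_join(rx, 0);         /* runs forever */
    return 0;
}
```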
Examples are TIBCO's Rendezvous and 29West's Ultra Messaging, which shows a 660 ns IPC bus:
http://www.globenewswire.com/newsroom/news.html?d=194703
1) How can the processor recognize the device requesting the interrupt?
2) Given that different devices are likely to require different ISR, how can the processor obtain the starting address in each case?
3) Should a device be allowed to interrupt the processor while another interrupt is being serviced?
4) How should two or more simultaneous interrupt requests be handled?
1) How can the processor recognize the device requesting the interrupt?
The CPU has several interrupt lines, and if you need more devices than there are lines, there's an "interrupt controller" chip (sometimes called a PIC) which multiplexes several devices and which the CPU can interrogate.
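For example, on a legacy x86 the CPU can ask the 8259 PIC which lines are currently being serviced by reading its In-Service Register. A minimal sketch, assuming the standard 0x20/0xA0 port pair and freestanding inline-asm port I/O helpers:

```c
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t v)
{
    __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(port));
}
static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

#define PIC1_CMD      0x20       /* master 8259 command port */
#define PIC2_CMD      0xA0       /* slave 8259 command port  */
#define OCW3_READ_ISR 0x0B       /* next read returns the In-Service Register */

/* Bitmask of IRQs currently being serviced (bit n = IRQ n). */
uint16_t pic_get_isr(void)
{
    outb(PIC1_CMD, OCW3_READ_ISR);
    outb(PIC2_CMD, OCW3_READ_ISR);
    return (uint16_t)inb(PIC1_CMD) | ((uint16_t)inb(PIC2_CMD) << 8);
}
```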
2) Given that different devices are likely to require different ISRs, how can the processor obtain the starting address in each case?
That's difficult. It may be by convention (the same type of device is always on the same line), or it may be configured, e.g. in the BIOS setup.
3) Should a device be allowed to interrupt the processor while another interrupt is being serviced?
When there's an interrupt, further interrupts are disabled. However, the interrupt service routine (i.e. the device-specific code which the CPU is executing) may re-enable interrupts if it's willing to be interrupted.
4) How should two or more simultaneous interrupt requests be handled?
Each interrupt has a priority: the higher-priority interrupt is handled first.
The concept of defining a priority among devices, so as to know which one is to be serviced first in the case of simultaneous requests, is called a priority interrupt system. This can be implemented with either software or hardware methods.
SOFTWARE METHOD – POLLING
In this method, all interrupts are serviced by branching to the same service program. This program then checks each device to see whether it is the one generating the interrupt; the order of checking is determined by the priorities that have been assigned. The device with the highest priority is checked first, and the remaining devices are checked in descending order of priority, as in the sketch below.
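A minimal sketch of such a polled dispatcher, with a hypothetical device table ordered highest-priority first:

```c
#include <stddef.h>
#include <stdio.h>

typedef int  (*pending_fn)(void);   /* nonzero if the device raised the IRQ */
typedef void (*service_fn)(void);   /* device-specific service routine      */

struct polled_dev {
    const char *name;
    pending_fn  pending;
    service_fn  service;
};

/* Two stand-in devices; real code would read each device's status register */
static int  disk_pending(void) { return 0; }
static void disk_service(void) { puts("servicing disk"); }
static int  uart_pending(void) { return 1; }
static void uart_service(void) { puts("servicing uart"); }

/* Ordered highest priority first: table order IS the priority */
static const struct polled_dev devices[] = {
    { "disk", disk_pending, disk_service },
    { "uart", uart_pending, uart_service },
};

/* The single common service routine that all interrupts branch to */
void common_isr(void)
{
    for (size_t i = 0; i < sizeof devices / sizeof devices[0]; i++) {
        if (devices[i].pending()) {   /* first pending device wins */
            devices[i].service();
            return;
        }
    }
}

int main(void)
{
    common_isr();                     /* demo: services the uart */
    return 0;
}
```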
HARDWARE METHOD – DAISY CHAINING
The daisy-chaining method involves connecting all the devices that can request an interrupt in series. The configuration is governed by the devices' priorities: the device with the highest priority is placed first in the chain, so it receives the interrupt acknowledge before lower-priority devices and can intercept it when it has a pending request.