Assume an I2C bus with multiple masters and several slaves connected.
One master is already communicating with a particular slave.
Now, if another master wants to initiate communication by putting a slave address on the bus, how would that master know that the bus is already busy, so that it waits until the bus is freed?
All the devices (both masters and slaves) would track the state of the bus.
As soon as a Start condition is seen, the bus is considered busy until a Stop is received. Which master actually owns the bus is decided by bus arbitration.
A master that lost arbitration reports the loss (through a flag in a status register) and waits until a Stop condition is found.
A master that didn't initiate the transaction reports that the bus is busy with another flag, and keeps tracking the state of the bus to detect the Stop condition.
There's no way to tell whether the bus is free by sampling the bus lines just once. Yes, if you see either line at 0, that definitely means the bus is busy; but seeing both lines at 1 doesn't mean the bus is free yet. All devices must track the bus state changes to detect Start/Stop conditions.
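As an illustration, here is a minimal polled sketch of that tracking logic; read_scl() and read_sda() are hypothetical helpers standing in for your platform's GPIO reads, and a real design would normally use hardware detection or a pin-change interrupt instead:

```c
#include <stdbool.h>

/* Hypothetical helpers -- replace with your platform's GPIO reads. */
extern bool read_scl(void);
extern bool read_sda(void);

static volatile bool bus_busy = false;

/* Call often enough to catch SDA edges while SCL is high (the spec's
 * "at least twice per clock period"). A Start is SDA falling while SCL
 * is high; a Stop is SDA rising while SCL is high. */
void i2c_track_bus_state(void)
{
    static bool last_sda = true;

    bool scl = read_scl();
    bool sda = read_sda();

    if (scl) {
        if (last_sda && !sda)        /* SDA 1 -> 0, SCL high: Start */
            bus_busy = true;
        else if (!last_sda && sda)   /* SDA 0 -> 1, SCL high: Stop  */
            bus_busy = false;
    }
    last_sda = sda;
}
```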
In the I2C specification it is stated that: "Detection of START and STOP conditions by devices connected to the bus is easy if they incorporate the necessary interfacing hardware. However, microcontrollers with no such interface have to sample the SDA line at least twice per clock period to sense the transition."
But the document does not elaborate on such hardware. Also, how do you determine the sampling points for detecting the Start or Stop conditions?
Must the slave agree on the communication rate?
I'm developing a stack layer on the STM32L433 microcontroller that uses the CAN protocol; a fundamental part of the stack is the authentication of the devices.
During authentication it can happen that two (or more) devices start to send a CAN message (authentication message) with the same identifier and different payloads (a true random value). In this case every device should be able to detect whether this message was sent first by another device.
I have studied this case and three situations can occur:
1. The devices start to send the message at the same time; in this case only one device is able to send the message, because all the other devices detect an error and abort the transmission.
2. Only one device manages to send the message and occupy the bus before the other devices load the transmit MAILBOX of the CAN peripheral, or before the CAN peripherals of the other devices set the message to be sent to the SCHEDULED state.
In this case, the devices that were not able to send the message will receive the reception interrupt; within the reception ISR I'm able to abort the transmission.
3. Only one device is able to send the message and occupy the bus, while the CAN peripherals of all the other devices have the message in the SCHEDULED state, waiting for the bus to become idle.
In this case the devices that were not able to send the message will receive the reception interrupt. Here too I thought of aborting the transmission within the reception ISR (as in situation 2), but I'm not sure this is guaranteed for all messages: if the CAN peripheral moves the message to be sent to the TRANSMIT state before the code inside the ISR executes, the abort operation will have no effect.
My question (related to situation 3) is: is the message in the transmit MAILBOX in the SCHEDULED state only moved to the TRANSMIT state after the code in the receiving ISR has executed, or is this not guaranteed?
To answer your third case first: no, it is not guaranteed that your message is not already on the bus while you are receiving, because interrupts have some latency too, and within that time the mailbox may go ahead with the transmission.
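As a sketch of that race, assuming the current STM32 HAL CAN driver (HAL_CAN_AbortTxRequest() and HAL_CAN_IsTxMessagePending() exist there, but verify against your driver version), the abort in the RX callback can only be trusted if you check the outcome afterwards:

```c
#include "stm32l4xx_hal.h"  /* assumes the STM32L4 HAL CAN driver */

extern CAN_HandleTypeDef hcan1;
extern uint32_t auth_tx_mailbox;  /* mailbox from HAL_CAN_AddTxMessage() */

/* Called from the RX FIFO message-pending callback. Requesting the
 * abort is not enough: if the mailbox already left SCHEDULED and won
 * arbitration, the frame is on the bus and the request has no effect. */
void on_auth_message_received(void)
{
    HAL_CAN_AbortTxRequest(&hcan1, auth_tx_mailbox);

    if (HAL_CAN_IsTxMessagePending(&hcan1, auth_tx_mailbox) == 0U) {
        /* The mailbox is now empty: either the abort worked or the
         * frame was already sent. You still have to disambiguate,
         * e.g. via the TX status flags or by seeing your own frame
         * come back in the RX path. */
    }
}
```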
Your "authentication" also sounds a bit troublesome, since nobody from outside could also actually decide which ECU was actually the one that won the arbitration and actually sent that specific message.
We have ECUs in vehicles which decide at runtime, according to certain methods, where they are mounted by pin and some CAN reception, but only in listen mode. TX is actually disabled in the stack. After that, detection has completed, we switch configurations and restart the communications stack and further initialize the software going up.
But these "setups" are usually defined beforehand, e.g. due to master/slave (vehicle/private bus communication), or maybe some connector pins connected to GND / OPEN / UBAT, or maybe some bus message which tells on which bus it is on.
That seems to be more reliable than your method.
I am working on an embedded project which includes two half-duplex UARTs and one full-duplex UART.
UART1 is connected to Device A. UART2 is connected to Device B, and UART3 is connected to the PC. UART1 and UART2 are half-duplex, thus RX/TX modes have to be configured properly.
When a signal on UART1 is triggered, UART2 fetches some data from Device B. That data is put into a buffer, and then transmitted back to UART1 and UART3. Device A consumes the data, and sends more items on UART1, which then have to be passed to UART2 for Device B to respond.
I was thinking about an efficient state machine that can handle the switching modes between TX/RX mode, and so far my UART code is interrupt driven. What would be some ways to tackle the flow of this program?
I don't think you will need a state machine for this case. Why not just hook up all the interrupts accordingly and simply forward anything received from one device to the other(s)?
You may want to include a TX (ring) buffer to accommodate the different speeds of each UART, and then just have an RX ISR write the received data to the appropriate TX buffer(s), from where it will be consumed by the other UARTs' UDRE ISRs.
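A minimal sketch of that idea follows; the ISR names and buffer layout are illustrative, not any specific vendor API:

```c
#include <stdint.h>
#include <stdbool.h>

#define BUF_SIZE 128u  /* power of two, so index wrap is a cheap mask */

typedef struct {
    volatile uint8_t  data[BUF_SIZE];
    volatile uint16_t head;  /* written by the RX ISR     */
    volatile uint16_t tail;  /* read by the TX (UDRE) ISR */
} ring_buf_t;

static ring_buf_t uart1_tx, uart3_tx;

static bool ring_put(ring_buf_t *rb, uint8_t byte)
{
    uint16_t next = (rb->head + 1u) & (BUF_SIZE - 1u);
    if (next == rb->tail)
        return false;               /* buffer full: byte dropped */
    rb->data[rb->head] = byte;
    rb->head = next;
    return true;
}

/* RX ISR for UART2: fan each received byte out to both TX buffers.
 * The TX-empty ISRs of UART1 and UART3 then drain their own buffer
 * at their own baud rate. */
void uart2_rx_isr(uint8_t byte)
{
    ring_put(&uart1_tx, byte);
    ring_put(&uart3_tx, byte);
}
```

The half-duplex direction switching can then live entirely in the TX interrupts: switch the transceiver to TX when a buffer becomes non-empty, and back to RX once the last byte has fully shifted out.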
I am using several XBee Zigbee with some Arduino modules (or microcontrollers, Arduino is not mandatory). I configured my XBees in AT/transparent mode.
I need to broadcast information: when one module is touched, every other module must react at the same time and immediately.
Unfortunately, while I get good speed in unicast mode, there is a lot of latency in broadcast mode. This is known and documented; see XBee ZigBee Addressing.
No data is lost, but it is sometimes buffered for a few seconds by an XBee before being sent again or delivered to my Arduino.
It seems it is not a configuration problem; it is the way the broadcast protocol works. Any idea how I could speed up the process?
The only one I have would be to use API mode, make each Arduino keep a list of the XBee addresses, and unicast the information to each address on that list... but then I lose the convenience of broadcasting, and I cannot easily add a new module without updating every Arduino.
Transmitting data using broadcast addressing with XBee ZB modules will generally give you much, much less performance than transmitting an individual unicast to each node you want to talk to. This is because broadcasting works very differently on the XBee ZB modules than with the XBee 802.15.4 modules.
When you send a broadcast with the XBee 802.15.4 modules, a single 802.15.4 frame is transmitted to the network and all the nodes that can hear the transmission pick it up and send the information out of their serial UARTs. The 802.15.4 network is a simple star network and no implicit repeating of the broadcast is performed by any of the nodes on the network. With XBee ZB, this is different. The XBee ZB modules are acting in a mesh topology and need to repeat the information to the other nodes that are out of range of the original transmission.
When you send a broadcast with the XBee ZB modules, each node that receives the broadcast will re-broadcast it 3 times, causing a lot of data to be transmitted between nodes. Additionally, there can only be a certain number of broadcasts which are "live" on a network at any given time. This often surprises people into thinking that the network is dropping their data when in fact the XBee is rejecting the transmission request.
Unless you are sending data very infrequently--perhaps a broadcast once per minute or more slowly--it is often better to follow this procedure:
Build a list of all nodes by performing a network discovery, or by collecting route record packets after enabling the AR feature
Send a unicast to each node you wish to transmit to (a frame-building sketch follows)
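For illustration, here is a minimal builder for an API-mode ZigBee Transmit Request (frame type 0x10) unicast. The layout follows the public XBee ZB API frame documentation, but verify the details against your module's manual:

```c
#include <stdint.h>
#include <stddef.h>

/* Build an XBee ZB "Transmit Request" (0x10) API frame addressed to a
 * single node. Writes the frame into out[] and returns its length. */
size_t build_zb_unicast(uint8_t *out, uint64_t dest64, uint16_t dest16,
                        const uint8_t *payload, size_t len)
{
    size_t n = 0;
    uint16_t flen = (uint16_t)(14 + len);   /* frame data length */

    out[n++] = 0x7E;                        /* start delimiter   */
    out[n++] = (uint8_t)(flen >> 8);        /* length MSB        */
    out[n++] = (uint8_t)(flen & 0xFF);      /* length LSB        */

    size_t cksum_start = n;
    out[n++] = 0x10;                        /* frame type: ZB TX request  */
    out[n++] = 0x01;                        /* frame ID (non-zero => TX status reply) */
    for (int i = 7; i >= 0; i--)            /* 64-bit destination address */
        out[n++] = (uint8_t)(dest64 >> (8 * i));
    out[n++] = (uint8_t)(dest16 >> 8);      /* 16-bit addr, 0xFFFE if unknown */
    out[n++] = (uint8_t)(dest16 & 0xFF);
    out[n++] = 0x00;                        /* broadcast radius */
    out[n++] = 0x00;                        /* options          */
    for (size_t i = 0; i < len; i++)
        out[n++] = payload[i];

    uint8_t sum = 0;                        /* checksum over frame data */
    for (size_t i = cksum_start; i < n; i++)
        sum += out[i];
    out[n++] = (uint8_t)(0xFF - sum);
    return n;
}
```

Looping over the discovered node list and calling this once per node replaces the single broadcast; the non-zero frame ID makes the module return a TX status frame, giving you a per-node delivery confirmation.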
If you're sending information to nodes on a large ZB network (i.e. greater than 30 nodes) you may want to read this article: Large Networks and Source Routing
I don't think you can optimize it much more, unless only some of the modules need to receive the message. In that case you could use a multicast (might only be available with XBee 2) instead of a broadcast, which would bring some very minor improvement to the overall speed of your network once it grows big enough (greater than 16 nodes, i.e. the basic routing table).
Have you tried a comparison between unicast, multicast and broadcast? It may be that making a dozen unicasts is on average faster, or at least more reliable, especially if you have many hops in your specific network (e.g. a 12-node network with 8 hops).
With unicast you can get a confirmation or ack so you know the overall time and success of the operation, and whether you need to retry or not.
On Red Hat Linux, I have a multicast listener listening to a very busy multicast data source. It runs perfectly by itself, with no packet loss. However, once I start a second instance of the same application with exactly the same settings (same src/dst IP address, socket buffer size, user buffer size, etc.), I start to see very frequent packet loss from both instances, and they lose exactly the same packets. If I stop one of the instances, the remaining one returns to normal without any packet loss.
Initially, I thought it was a CPU/kernel load issue; maybe the system could not get packets out of the buffer quickly enough. So I did another test: I kept one instance of the application running, but started a completely different multicast listener on the same computer, using the second NIC and listening to a different, even busier multicast source. Both applications ran fine without any packet loss.
So it looks like one NIC is not powerful enough to support two multicast applications, even when they listen to exactly the same thing. A possible cause of the packet loss might be that, in this scenario, the NIC driver needs to copy the incoming data into two socket buffers, and this extra copy is too much for it to handle, so it drops packets. Any deeper analysis of this issue, and any possible solutions?
Thanks
You are basically finding out that the kernel is inefficient at fanning out multicast packets. In the worst-case scenario, for every incoming packet the code allocates two new buffers (the SKB object plus the packet payload) and copies the NIC buffer twice.
Take the best-case scenario: for every incoming packet a new SKB is allocated, but the packet payload is shared between the two sockets with reference counting. Now imagine what happens when the two applications each run on their own core with their own socket. Every reference to the packet payload causes the memory bus to stall while both cores' caches flush and reload, and on top of that each application incurs kernel context switches back and forth to receive the socket payload. The result is terrible performance.
You aren't the first to encounter such a problem and many vendors have created solutions to it. The basic design is to limit the incoming data to one thread on one core on one socket, then have that thread distribute the data to all other interested threads, preferably using user space code building upon shared memory and lockless data structures.
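A heavily simplified sketch of that design, assuming one receiver thread and a lock-free single-producer/single-consumer ring per interested thread (all the names here are illustrative):

```c
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define SLOTS     256u    /* power of two                    */
#define SLOT_SIZE 2048u   /* max datagram size we care about */

/* One SPSC ring per consumer thread: the receiver thread is the only
 * writer and the consumer the only reader, so no locks are needed. */
typedef struct {
    uint8_t          buf[SLOTS][SLOT_SIZE];
    uint16_t         len[SLOTS];
    _Atomic uint32_t head;   /* advanced by the receiver */
    _Atomic uint32_t tail;   /* advanced by the consumer */
} spsc_ring_t;

static int ring_push(spsc_ring_t *r, const void *pkt, uint16_t n)
{
    uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h - t == SLOTS)
        return -1;           /* ring full: drop or apply backpressure */
    memcpy(r->buf[h & (SLOTS - 1u)], pkt, n);
    r->len[h & (SLOTS - 1u)] = n;
    atomic_store_explicit(&r->head, h + 1u, memory_order_release);
    return 0;
}

/* The receiver loop (omitted) is ONE thread on ONE core reading the
 * single multicast socket, then calling ring_push() once per consumer:
 * the kernel only ever sees one socket, and the fan-out happens in
 * user space. */
```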
Examples are TIBCO's Rendezvous and 29 West's Ultra Messaging, which demonstrates a 660 ns IPC bus:
http://www.globenewswire.com/newsroom/news.html?d=194703
1) How can the processor recognize the device requesting the interrupt?
2) Given that different devices are likely to require different ISR, how can the processor obtain the starting address in each case?
3) Should a device be allowed to interrupt the processor while another interrupt is being serviced?
4) How should two or more simultaneous interrupt requests be handled?
1) How can the processor recognize the device requesting the interrupt?
The CPU has several interrupt lines, and if you need more devices than there are lines there's an "interrupt controller" chip (sometimes called a PIC) which will multiplex several devices and which the CPU can interrogate.
2) Given that different devices are likely to require different ISRs, how can the processor obtain the starting address in each case?
That's difficult. It may be by convention (same type of device always on the same line); or it may be configured, e.g. in the BIOS setup.
3) Should a device be allowed to interrupt the processor while another interrupt is being serviced?
When there's an interrupt, further interrupts are disabled. However, the interrupt service routine (i.e. the device-specific code which the CPU is executing) may reenable interrupts if it's willing to be interrupted.
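As a concrete illustration on a small MCU (AVR here, where the hardware clears the global interrupt enable flag on ISR entry), the ISR can opt back in with sei():

```c
#include <avr/interrupt.h>

/* On AVR the hardware clears the global interrupt enable bit when an
 * ISR starts, so by default an ISR runs to completion uninterrupted. */
ISR(TIMER0_OVF_vect)
{
    sei();  /* explicitly reenable interrupts: this handler is now
               willing to be preempted (nested interrupts)          */
    /* ... longer, preemptible work ... */
}
```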
4) How should two or more simultaneous interrupt requests be handled?
Each interrupt has a priority: the higher-priority interrupt is handled first.
The concept of defining a priority among devices, so as to know which one is to be serviced first in case of simultaneous requests, is called a priority interrupt system. This can be done with either software or hardware methods.
SOFTWARE METHOD – POLLING
In this method, all interrupts are serviced by branching to the same service routine. This routine then checks each device to see whether it is the one that generated the interrupt. The order of checking is determined by the priority to be enforced: the device with the highest priority is checked first, and the remaining devices are checked in descending order of priority.
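A minimal sketch of such a polled dispatcher (the device table and the irq_pending()/handle() callbacks are hypothetical):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-device status check and service routine. */
typedef struct {
    bool (*irq_pending)(void);  /* reads the device's status register */
    void (*handle)(void);       /* device-specific service code       */
} device_t;

extern device_t devices[];      /* ordered highest priority first */
extern size_t   device_count;

/* Single common ISR entry point: walk the devices in priority order
 * and service the first one whose interrupt flag is set. */
void common_interrupt_handler(void)
{
    for (size_t i = 0; i < device_count; i++) {
        if (devices[i].irq_pending()) {
            devices[i].handle();
            return;
        }
    }
}
```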
HARDWARE METHOD – DAISY CHAINING
The daisy-chaining method involves connecting all the devices that can request an interrupt in a serial manner. The order of connection is governed by the priority of the devices: the device with the highest priority is placed first.