I am looking into the I2C protocol for PIC16F88X.
What I would like to do is enable an I2C slave to either ACK or NACK depending on the data received on the bus.
The PIC can ACK or NACK the I2C address sent on the line, but from what I've read it will always ACK the subsequent received bytes. Is that correct?
In the following communication:
Start - I2C_Addr+Write/ACK - Register_value/NACK
I'd like the slave to be able to ACK or NACK depending on the value of Register_value. If the slave does not understand Register_value, it should not ACK.
Could someone please either confirm that this is not possible, or tell me how to do it?
Assuming you're using the MSSP peripheral:
Short answer: what you're asking for is probably not possible with a PIC, at least without bit-banging I/O lines. The reason is that ACK/NACK is checked on the 9th clock edge, and the SSPIF interrupt does not fire until the end of the 9th clock. You could try repeatedly polling the BF bit, as it is set as soon as the data byte has been shifted into the I/O register (the 8th clock). If you can pull off a comparison and set the SSPOV bit before the 9th clock cycle, this should generate a NACK, but it is quite sketchy if you have any other interrupts running.
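A minimal sketch of that BF-polling experiment, assuming XC8-style register names on a PIC16F88x slave; command_is_valid() is a placeholder, and the whole thing races the 9th clock, so treat it as an experiment rather than a reliable technique:
unsigned char rx;

while (!SSPSTATbits.BF)         /* byte is complete on the 8th clock    */
    ;
rx = SSPBUF;                    /* reading SSPBUF clears BF             */
if (!command_is_valid(rx)) {    /* command_is_valid() is a placeholder  */
    SSPCONbits.SSPOV = 1;       /* force "overflow" so the module NACKs */
}                               /* on the 9th clock                     */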
Longer answer: it sounds like you're attempting to use ACK to validate whether the data byte the slave receives is valid or not. Personally I wouldn't do this; ACK is there to signal the integrity of the line, not to verify data integrity. If the device is a slave, the master by definition must know exactly how the slave is supposed to work, and it can check the validity of the byte before pushing it out on the I2C lines. In such cases I assume you also have control over the I2C master's code, so use one common header file that defines all the commands or valid data bytes that can be sent, to avoid mismatches between the two sides.
If you must guarantee that the proper byte was sent for some reason, have the master ask the slave for a response byte, and have the slave return a code indicating the result of the previous transfer.
If your intent is to guarantee the integrity of the I2C line, none of these approaches really works. Your only option would be to send a block of bytes at boot or periodically, together with a CRC, and check that it matches on the slave. Generally, I2C lines either work or they don't: they are low speed, usually have short traces, and allow a high bus capacitance. If they aren't working, you won't see any ACKs at all.
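To illustrate the CRC idea, here is a minimal sketch; the polynomial 0x07 (CRC-8/CCITT) is an arbitrary choice, not something from the question. Both master and slave would run this over the same block and compare the results:
#include <stdint.h>
#include <stddef.h>

uint8_t crc8(const uint8_t *buf, size_t len)
{
    uint8_t crc = 0x00;
    while (len--) {
        crc ^= *buf++;                       /* fold in the next byte  */
        for (int i = 0; i < 8; i++)          /* one shift per data bit */
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}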
My guess is no, if the I2C hardware is built into the PIC. All of the hardware solutions I've worked with have a state machine that can't help but ACK the second byte unless there's something wrong with the transmission (a missing bit, for instance). You'd be better off making your own I2C implementation in software with bit-banging and an open-collector buffer for the ACK; then you can do anything you want. It won't be standard I2C, so watch out if you put any devices on the bus that aren't working to your specifications. I'm not sure offhand, but I think a standard I2C device that doesn't receive an ACK may retransmit the data or just fault, since it isn't sure who has control of the bus after a failure (signified by a NAK).
I'm developing a USB device driver for a microcontroller (Atmel/Microchip SAMD21, but I think the question is a general one). I need multiple endpoints for control & data, and the USB hardware uses per-endpoint descriptors to (among other things) locate buffers for input and output data.
Since IN data is polled at the host's discretion it makes sense that each endpoint has its own IN buffer, so that any endpoint's data (if it has any to send) is immediately available when polled.
But as far as incoming data from SETUP & OUT transactions is concerned, it occurs to me that I can save memory by configuring all endpoints to use a shared buffer. It seems wasteful for each endpoint to have its own buffer when, given the nature of USB transactions, only one such transaction can occur at a time.
Obviously this approach requires that transaction interrupts are handled sufficiently quickly that the shared buffer is freed and prepared for a new transaction in time for whatever the next transaction might be - but this is already a requirement for the control endpoint, where some SETUP transactions are immediately followed by an OUT.
So, assuming the timing is feasible, is there any other reason why such an approach wouldn't work?
Probably not.
Normally, the USB module on a microcontroller handles OUT packets by keeping track of which packet buffers it has written data to, and it waits for your firmware to say it is done processing the buffer before accepting more data from the computer and overwriting the buffer. If an endpoint has no buffers available to receive more data, but the computer sends an OUT packet to the endpoint, the USB module typically responds to the computer with a NAK packet, which tells the computer it should retry later. In this situation, your firmware can take pretty much as long as it wants to handle the OUT packets.
By having multiple endpoints configured to use the same buffer, you mess up this system. When you receive an OUT packet on any of your endpoints, the USB module would (probably) not know that multiple endpoints use the same buffer, so it would not issue NAK packets on your other OUT endpoints. If it receives another OUT packet right away, it would write it to the same buffer, overwriting the previous packet. Therefore, whenever you receive a packet, your code would have to rush as fast as it can to do something like copying the data out of that buffer, disabling other OUT endpoints, or reassigning buffers.
Even if you can actually get this to work, it means that your scheme to save a little bit of memory turns the servicing of USB events into a real-time task (i.e. a task that requires responses from your code in a few microseconds). If you want to add another real-time task to your system later, it will be very difficult, because you always have to be ready to be interrupted by your USB-handling code.
The SAMD21 has tons of memory (32 KB) so you probably don't need to worry about optimizing this part of it.
I agree with David's response. You didn't mention the speed of the device you are creating. A low-speed device would need just a few 8-byte buffers; a full-speed device, a few 64-byte buffers; a high-speed device, maybe eight 64-byte buffers, depending on your use. Even with a super-speed device, you're still only talking about a few 512-byte buffers.
I would create a ring buffer for each endpoint. That way you are not moving data around; you are simply using a pointer that points to an entry within a memory ring. A full-speed device with a control endpoint, an interrupt endpoint, and two bulk endpoints, each endpoint having sixteen 64-byte entries per ring, still only needs a total of 4 KB of RAM, 1/8th of the total RAM.
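As an illustration of that layout, here is a minimal sketch of one such ring; the names, sizes, and the omitted overflow check are assumptions, not anything SAMD21-specific:
#include <stdint.h>
#include <stddef.h>

#define ENTRY_SIZE 64
#define RING_DEPTH 16                 /* sixteen 64-byte entries per ring */

typedef struct {
    uint8_t  data[RING_DEPTH][ENTRY_SIZE];
    uint16_t len[RING_DEPTH];         /* bytes received in each entry     */
    volatile uint8_t head;            /* ISR side: next entry to fill     */
    volatile uint8_t tail;            /* main loop: next entry to read    */
} ep_ring_t;

/* ISR side: where the USB hardware should place the next OUT packet.    */
static uint8_t *ring_write_slot(ep_ring_t *r) {
    return r->data[r->head];
}

/* ISR side: mark the current slot filled (full-ring check omitted).     */
static void ring_commit(ep_ring_t *r, uint16_t n) {
    r->len[r->head] = n;
    r->head = (uint8_t)((r->head + 1u) % RING_DEPTH);
}

/* Main loop: borrow a pointer to the oldest unread entry, or NULL.      */
static const uint8_t *ring_peek(ep_ring_t *r, uint16_t *n) {
    if (r->tail == r->head)
        return NULL;
    *n = r->len[r->tail];
    return r->data[r->tail];
}

/* Main loop: release the entry returned by ring_peek().                 */
static void ring_release(ep_ring_t *r) {
    r->tail = (uint8_t)((r->tail + 1u) % RING_DEPTH);
}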
However, I am not familiar with the SAMD21, so please check the specification to be sure this will work.
Is it possible to send nothing but zeros over SPI? If yes, how does the hardware know if there's really data or not, since the level on the MOSI/MISO line never changes?
I assume that with SPI you mean Serial Peripheral Interface.
According to Wikipedia, the data is sent like this:
During each SPI clock cycle, a full duplex data transmission occurs. The master sends a bit on the MOSI line and the slave reads it, while the slave sends a bit on the MISO line and the master reads it. This sequence is maintained even when only one-directional data transfer is intended.
(https://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus#Data_transmission).
Because SPI works with a clock, sending (only) zeros should work: the receiver samples the data line on clock edges, so it does not need to see transitions on MOSI/MISO to know that bits are being transferred.
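To make that concrete, here is a tiny bit-banged sketch (master side, SPI mode 0); the pin macros are placeholders. Each bit is defined by a clock edge, so a byte of all zeros still produces eight well-defined samples at the receiver:
#include <stdint.h>

uint8_t spi_transfer(uint8_t out)
{
    uint8_t in = 0;
    for (int8_t i = 7; i >= 0; i--) {
        MOSI_WRITE((out >> i) & 1u);              /* may stay low all byte */
        SCK_HIGH();                               /* slave samples MOSI    */
        in = (uint8_t)((in << 1) | MISO_READ());  /* master samples MISO   */
        SCK_LOW();
    }
    return in;
}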
Besides that, with an encoding like the Manchester code, the MOSI/MISO line would change. I'm not sure whether that can be combined with the Serial Peripheral Interface, though.
I'm using PIC18F87J11 as the master and LiPower Shield as the slave, and all I want to do is to be able to read the battery voltage value from the LiPower Shield. I'm using MPLAB C18 libraries for the I2C communication. I'm not able to get correct readings as I think the communication between the two devices is not setup correctly.
I'm looking for interpretations of the waveform signals in order to detect the issue. Also I would like to know if I'm missing something in the code. Any recommendations to improve the code would be helpful.
The LiPower Shield came with sample code for Arduino, but I'm using a PIC18 chip from Microchip. The sample code is found here.
Here are the signals I'm getting while trying to read the battery voltage.
Code:
SSP2ADD = 19;
OpenI2C2(MASTER,SLEW_OFF);
StartI2C2();
IdleI2C2();
WriteI2C2(0x36);
IdleI2C2();
data = ReadI2C2(); // Read byte of data
printf ("\r\nAddress 32");
printf (" Byte:");
PrintChar(data);
IdleI2C2();
AckI2C2();
IdleI2C2();
WriteI2C2(0x02);
IdleI2C2();
data = ReadI2C2(); // Read byte of data
printf ("\r\nAddress 02");
printf (" Byte:");
PrintChar(data);
IdleI2C2();
AckI2C2();
StopI2C2(); // Stop condition I2C on bus
Output, which I think is wrong:
Address 32 Byte:FF
Address 02 Byte:FF
I'm not really sure if I'm writing to or reading from the correct address, but that's the address they used in their sample code. I hope I can get some interpretation of the signals and feedback on the code if possible.
I'm not familiar with the PIC, but your code looks nowhere near right. Per the MAX17043 datasheet, page 12, a memory read must consist of the following:
I2C start condition
Write device I2C write address (0x6C)
Check for ACK from slave
Write 8-bit memory address
Check for ACK from slave
I2C repeated start condition
Write device I2C read address (0x6D)
Read first byte of data
Send ACK
Read second byte of data
Send NACK
I2C Stop condition
What I see in your code is an I2C Start condition followed by a write of 0x36. Since this is not the device address the slave recognizes, it sends a NACK (as seen on your logic analyzer traces) and ignores everything else.
This question and answer has a lot of information on I2C on a PIC18. You should also probably find and read a basic I2C tutorial.
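For reference, an untested sketch of that sequence using the same MPLAB C18 I2C2 routines as the question; 0x6C/0x6D and the register address 0x02 come from the datasheet steps above, and the per-step ACK checks (SSP2CON2bits.ACKSTAT) are omitted for brevity:
unsigned char msb, lsb;

IdleI2C2();
StartI2C2();                 /* Start condition                  */
WriteI2C2(0x6C);             /* device write address             */
IdleI2C2();
WriteI2C2(0x02);             /* memory address to read from      */
IdleI2C2();
RestartI2C2();               /* repeated Start                   */
WriteI2C2(0x6D);             /* device read address              */
IdleI2C2();
msb = ReadI2C2();            /* first data byte                  */
AckI2C2();                   /* master ACKs                      */
IdleI2C2();
lsb = ReadI2C2();            /* second data byte                 */
NotAckI2C2();                /* master NACKs to end the read     */
StopI2C2();                  /* Stop condition                   */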
I've got a SIM card connected to my microcontroller. The RST, I/O, and CLK pins are wired correctly. There is a hardware UART on my board, but since it is full-duplex and not half-duplex, I've jumpered RX/TX together.
So far, I toggle RST according to ISO 7816, and my UART buffer fills up with the ATR the SIM card responds with. Once I've received the ATR, I change the UART to TX mode and send it a PPS. After sending, I change the UART back to RX-only mode. The PPS follows the correct format as stated in ISO 7816, but I do not receive the confirmation bytes from the SIM. The confirmation is supposed to be a repeat of the settings I sent.
I suppose your problem is of the same origin as one I had with GSM modems.
You send a command and get an acknowledgement from the device, then send the next command, get an ACK, and so on. Sooner or later the device hangs.
The key is the interpretation of the acknowledgement.
You may think the acknowledgement means the command was accepted AND executed. However, on all the GSM modems I know, it means no more than that the command was accepted and INTERPRETED, but not executed. With time-consuming commands, you end up sending your next command while the previous one is still being executed, because you assume the acknowledgement means the command is done; that is not true.
The device may or may not buffer the accumulating commands, but sooner or later it runs out of resources and hangs.
I have no experience with the device you use, but the phenomenon seems to be the same.
While I'm not a protocol expert, the most likely cause seems to me that you send the PPS too early; "after sending" can easily be too early on modern microcontrollers. ISO 7816-3 states that the guard time applies as usual and that the waiting time is 9600 etus.
Sending the PPS too early means that the card is not yet listening, which perfectly explains receiving no response at all. A wrong format would instead cause an error block, which should also be visible on the scope; that supports my assumption.
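As a rough illustration of that timing, assuming a 4 MHz card clock and the initial Fi/Di = 372/1 (so one etu = 372 / 4 MHz = 93 us); the 16-etu margin is arbitrary, and delay_us(), uart_send_pps(), and wait_for_response() are placeholders for your platform:
#define CARD_CLK_HZ  4000000UL
#define ETU_US       ((372UL * 1000000UL) / CARD_CLK_HZ)   /* = 93 us */

void send_pps_with_timing(void)
{
    delay_us(16UL * ETU_US);    /* respect the guard time after the ATR */
    uart_send_pps();            /* then transmit the PPS request        */
    wait_for_response(9600UL * ETU_US);  /* card may take up to the     */
}                                        /* waiting time to answer      */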
I am writing code for a USB device. Suppose the USB host starts a control read transfer to read some data from the device, and the amount of data requested (wLength in the Setup Packet) is a multiple of the Endpoint 0 max packet size. Then after the host has received all the data (in the form of several IN transactions with maximum-sized data packets), will it initiate another IN transaction to see if there is more data even though there can't be more?
Here's an example sequence of events that I am wondering about:
USB enumeration process: max packet size on endpoint 0 is reported to be 64.
SETUP-DATA-ACK transaction starts a control read transfer, wLength = 128.
IN-DATA-ACK transaction delivers first 64 bytes of data to host.
IN-DATA-ACK transaction delivers last 64 bytes of data to host.
IN-DATA-ACK with zero-length DATA packet? Does this transaction ever happen?
OUT-DATA-ACK transaction completes Status Phase of the transfer; transfer is over.
I tested this on my computer (Windows Vista, if it matters) and the answer was no: the host was smart enough to know that no more data can be received from the device, even though all the packets sent by the device were full (maximum size allowed on Endpoint 0). I'm wondering if there are any hosts that are not smart enough, and will try to perform another IN transaction and expect to receive a zero-length data packet.
I think I read the relevant parts of the USB 2.0 and USB 3.0 specifications from usb.org but I did not find this issue addressed. I would appreciate it if someone can point me to the right section in either of those documents.
I know that a zero-length packet can be necessary if the device chooses to send less data than the host requested in wLength.
I know that I could make my code flexible enough to handle either case, but I'm hoping I don't have to.
Thanks to anyone who can answer this question!
Read the USB specification carefully:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
So in your case, when wLength == transfer size, the answer is no: you don't need a ZLP.
When wLength > transfer size and (transfer size % ep0 max packet size) == 0, the answer is yes: you need a ZLP.
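Stated as code, the rule looks like this (a sketch; the names are illustrative, not from any particular USB stack):
#include <stdbool.h>
#include <stdint.h>

bool need_zlp(uint16_t wLength, uint16_t transfer_len, uint16_t ep0_size)
{
    /* A ZLP is needed only when the device sends less than the host
       asked for AND the data ends exactly on a packet boundary.      */
    return (transfer_len < wLength) && (transfer_len % ep0_size == 0);
}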
In general, USB uses a less-than-max-length packet to demarcate the end of a transfer. So for a transfer that is an integer multiple of the max packet length, a ZLP is used for demarcation.
You see this a lot on bulk pipes. For example, a 4096-byte transfer will be broken down into an integer number of max-length packets plus one zero-length packet. If the SW driver has a big enough receive buffer set up, the higher-level SW receives the entire transfer at once, when the ZLP occurs.
Control transfers are a special case because they have the wLength field, so ZLP isn't strictly necessary.
But I'd strongly suggest the SW be flexible enough to handle both, as you may see variations with different USB host silicon or low-level HCD drivers.
I would like to expand on MBR's answer. The USB 2.0 specification, in section 5.5.3, says:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
When a Data stage is complete, the Host Controller advances to the Status stage instead of continuing on with another data transaction. If the Host Controller does not advance to the Status stage when the Data stage is complete, the endpoint halts the pipe as was outlined in Section 5.3.2. If a larger-than-expected data payload is received from the endpoint, the IRP for the control transfer will be aborted/retired.
The second-to-last sentence in that quote seems to specifically say what the device should do: it should "halt" the pipe if the host tries to continue the data phase after it is done, and it is done once all the requested data has been transmitted (i.e., the number of bytes transferred is greater than or equal to wLength). I think halting refers to sending a STALL packet.
In other words, the device does not need a zero-length packet in this situation and in fact the USB specification says it should not provide one.
You don't have to. (*)
The whole point of wLength is to tell the host the maximum number of bytes it should attempt to read (but it might read less !)
(*) I have seen devices that crash when IN/OUT requests were made at an incorrect time during control transfers (when debugging our host solution). So any host doing what you are worried about would have killed those devices, and is hopefully not on the market.