I2C: can multiple I2C errors occur simultaneously?

I'm writing a driver for the I2C protocol; the target microcontroller is an STM32F413ZH.
Don't ask me why I'm writing my own driver (it's a project requirement).
I want to create a simple public API that returns an error state, but I wonder whether multiple I2C errors can occur at the same time. If so, my API cannot return just a single enum value; it would have to return something more complex, such as a structure of bool bit fields, or something else.
Anyway, the main question is:
Can multiple I2C errors occur simultaneously?

The number of distinct I2C errors is limited (bounded by the number of bits in the status register).
More than one flag can be raised by the I2C hardware at the same time, so I usually assign each error its own bit in the enum:
typedef enum
{
    I2C_OK     = 0,
    I2C_ERROR1 = 1 << 0,
    I2C_ERROR2 = 1 << 1,
    I2C_ERROR3 = 1 << 2,
    I2C_ERROR4 = 1 << 3,
    /* other */
} I2C_ERRORS_ENUMS;
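Because each enumerator occupies its own bit, the driver can OR several errors into one value and the caller can test them individually with a bitwise AND. A minimal sketch of that pattern (the status-register bit positions and the helper names here are made up for illustration, not the real STM32F413 layout):

#include <stdint.h>

/* Hypothetical helper: translate a raw status-register value into a
   combined bitmask of the enum values above. The SR bit positions
   used here are examples only. */
uint32_t I2C_ErrorsFromSR(uint32_t sr)
{
    uint32_t errors = I2C_OK;
    if (sr & (1u << 8))  errors |= I2C_ERROR1;  /* e.g. bus error        */
    if (sr & (1u << 9))  errors |= I2C_ERROR2;  /* e.g. arbitration lost */
    if (sr & (1u << 10)) errors |= I2C_ERROR3;  /* e.g. ACK failure      */
    return errors;
}

/* The caller tests each flag separately; several may be set at once. */
void I2C_HandleErrors(uint32_t err)
{
    if (err & I2C_ERROR1) { /* recover from error 1 */ }
    if (err & I2C_ERROR2) { /* recover from error 2 */ }
    if (err & I2C_ERROR3) { /* recover from error 3 */ }
}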

Related

(STM32) Erasing flash and writing to flash gives HAL_FLASH_ERROR_PGP error (using HAL)

I'm trying to write to flash to store some configuration. I am using an STM32F446ze where I want to use the last 16 KB sector as storage.
I specified VOLTAGE_RANGE_3 when I erased my sector. VOLTAGE_RANGE_3 is mapped to:
#define FLASH_VOLTAGE_RANGE_3 0x00000002U /*!< Device operating range: 2.7V to 3.6V */
I am getting an error when writing to flash when I use FLASH_TYPEPROGRAM_WORD. The error is HAL_FLASH_ERROR_PGP. From the reference manual, I gather that this has to do with using the wrong parallelism/voltage levels.
Furthermore, in the reference manual I can read:
Programming errors
It is not allowed to program data to the Flash
memory that would cross the 128-bit row boundary. In such a case, the
write operation is not performed and a program alignment error flag
(PGAERR) is set in the FLASH_SR register. The write access type (byte,
half-word, word or double word) must correspond to the type of
parallelism chosen (x8, x16, x32 or x64). If not, the write operation
is not performed and a program parallelism error flag (PGPERR) is set
in the FLASH_SR register
So I thought:
I erased the sector in voltage range 3
That gives me the 2.7 to 3.6 V specification
That gives me x32 parallelism size
I should be able to write WORDs to flash.
But this line gives me an error (after unlocking the flash):
uint32_t sizeOfStorageType = ....; // Some uint I want to write to flash as test
HAL_StatusTypeDef flashStatus = HAL_FLASH_Program(TYPEPROGRAM_WORD, address++, (uint64_t) sizeOfStorageType);
auto err = HAL_FLASH_GetError(); // err == 4 == HAL_FLASH_ERROR_PGP: FLASH Programming Parallelism error flag
while (flashStatus != HAL_OK)
{
}
But when I start to write bytes instead, it goes fine.
uint8_t *arr = (uint8_t*) &sizeOfStorageType;
HAL_StatusTypeDef flashStatus;
for (uint8_t i = 0; i < 4; i++)
{
    flashStatus = HAL_FLASH_Program(TYPEPROGRAM_BYTE, address++, (uint64_t) *(arr + i));
    while (flashStatus != HAL_OK)
    {
    }
}
My questions:
Am I understanding correctly that after erasing a sector, I can only use one TYPEPROGRAM? That is, after erasing, can I only write bytes, OR half-words, OR words, OR double words?
What am I missing / doing wrong in the above context? Why can I only write bytes, when I erased with VOLTAGE_RANGE_3?
This looks like a data alignment error, but not the one related to the 128-bit flash memory rows mentioned in the reference manual. That one is probably related to double-word writes only, and is irrelevant in your case.
If you want to program 4 bytes at a time, your address needs to be word-aligned, meaning that it needs to be divisible by 4. Also, address is not a uint32_t* (pointer); it's a raw uint32_t, so address++ increments it by 1, not 4. As far as I know, the Cortex-M4 core converts unaligned accesses on the bus into multiple smaller aligned accesses automatically, but this violates the flash parallelism rule.
BTW, it's perfectly valid to perform a mixture of byte, half-word and word writes as long as they are properly aligned. Also, unlike the flash hardware of the F0, F1 and F3 series, you can try to overwrite a previously written location without causing an error; 0->1 bit changes are just ignored.
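To illustrate the fix, a word-programming loop should keep the destination word-aligned and advance the address by 4, not 1. A sketch, assuming the HAL headers are included and the sector is already erased (the start address is just an example; use the sector you actually reserved):

/* Program a small buffer of 32-bit words; the address stays 4-byte
   aligned because it advances by sizeof(uint32_t) each iteration. */
uint32_t address = 0x0800C000u;  /* example: a word-aligned sector start */
uint32_t data[4] = { 1, 2, 3, 4 };

HAL_FLASH_Unlock();
for (uint32_t i = 0; i < 4; i++)
{
    if (HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD, address, (uint64_t)data[i]) != HAL_OK)
    {
        uint32_t err = HAL_FLASH_GetError();  /* e.g. HAL_FLASH_ERROR_PGP */
        (void)err;                            /* inspect or log as needed */
        break;
    }
    address += sizeof(uint32_t);              /* += 4, not ++ */
}
HAL_FLASH_Lock();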

Why should I use & in this syntax? Problem with SPI register

I'm writing a program for SPI communication between an LPC2109/2 and an MCP4921. This is a university assignment.
My tutor asked me why "&" is necessary in this line, where we wait for the end of a SPI transmission. Which answer is right?
#define SPI_SPIF_bm (1<<7)
...
while((S0SPSR & SPI_SPIF_bm) == 0){}
We use "&" as logic AND, for instance: (0000 & 1000) gives us 0000 instead of (0000 | 1000) gives us 1000.
Can I use only this line of code: while((S0SPSR) == 0){}? In my opinion - no. We need to compare value in register S0SPSR with bit SPIF SPI_SPIF_bm.
Is there maybe different solution?
Attachment
User Manual for LPC2129/01: https://www.nxp.com/docs/en/user-guide/UM10114.pdf
The SPI peripheral of the LPC2109/2 sets different bits of S0SPSR depending on the actual event, which may depend on external circumstances. For example, if there is a write collision on the SPI line, it sets the WCOL bit instead of SPIF.
If you use while((S0SPSR) == 0){}, it will wait until either a successful transaction or an error happens, because it exits the loop if any bit of S0SPSR is set.
while((S0SPSR & SPI_SPIF_bm) == 0){} only checks whether the transaction has completed successfully. It is good practice to check the error bits too, because in case of an error you would be stuck in this loop forever, as SPIF is never going to be set.
For a robust solution I would go with something like this:
while (S0SPSR == 0) {}

if (S0SPSR & SPI_SPIF_bm) { /* SPI_SPIF_bm remains set until the data register has been accessed */
    /* Success: read the data register, return data, etc. */
} else {
    /* Handle error */
}
If you are interested in the particular type of the error, you need to store S0SPSR in a variable in each cycle, as those bits are cleared once S0SPSR has been read. You should also add a counter or a more sophisticated timeout to the loop, to exit if none of the flags is set within a reasonable period.
You might think these errors would never happen because you have a simple circuit, but they do happen in real life, and it's worth doing proper error handling.
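A sketch of that idea, latching the status into a variable (since reading S0SPSR affects its flags) and adding a crude loop-count timeout. The WCOL mask and the timeout value are illustrative; check UM10114 for the exact bit layout:

#define SPI_SPIF_bm (1 << 7)  /* transfer complete */
#define SPI_WCOL_bm (1 << 6)  /* write collision (per UM10114) */

unsigned int  timeout = 100000u;  /* crude loop-count timeout */
unsigned char status  = 0;

/* Read the status register once per iteration and keep the value. */
while (status == 0 && timeout-- > 0)
{
    status = S0SPSR;
}

if (status & SPI_SPIF_bm)
{
    unsigned char data = S0SPDR;  /* success: read the data register */
    /* ... use data ... */
}
else if (status == 0)
{
    /* timeout: no flag was set within a reasonable period */
}
else
{
    /* error: inspect the latched status, e.g. status & SPI_WCOL_bm */
}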

Read variable length messages over SPI using Low Level (LL) api on STM32 MCU

My system is composed of an STM32 Nucleo board and a slave device connected over SPI. The slave device sends commands of variable length: possible lengths are 4, 8, 10, and 14 bits.
I'm trying to detect these messages on my nucleo board using the LL APIs and interrupts.
The solution I'm currently working on is based on setting the SPI to a data width of 4 bits (SPI_InitStruct.DataWidth = LL_SPI_DATAWIDTH_4BIT) and then counting the number of words (1 word = 4 bits) that I receive. This way, if I receive 1 word it means I have received a 4-bit command, 2 words an 8-bit command. If I receive 3 words, it should mean I have received a 10-bit command (2 bits are discarded), and so on.
Unfortunately, I have noticed that the LL API provides functions only for reading 8 or 16 bits at a time, and I'm currently having an issue receiving a 4-bit command, since the function LL_SPI_ReceiveData8 expects to receive 8 bits.
Here is my implementation for the IRQ handler and for the callback:
IRQ Handler:
void SPI1_IRQHandler(void)
{
    /* Check RXNE flag value in ISR register */
    if (LL_SPI_IsActiveFlag_RXNE(SPI1))
    {
        /* Call function Slave Reception Callback */
        SPI1_Rx_Callback();
    }
    /* Check OVR flag value in ISR register */
    else if (LL_SPI_IsActiveFlag_OVR(SPI1))
    {
        /* Call Error function */
        SPI1_TransferError_Callback();
    }
}
Callback
void SPI1_Rx_Callback(void)
{
    /* Read character from the Data register.
       The RXNE flag is cleared by reading data from the DR register */
    aRxBuffer[ubReceiveIndex++] = LL_SPI_ReceiveData8(SPI1);
}
As said before, in my opinion the problem is that I'm using the LL_SPI_ReceiveData8 function to read, since I could not find something like LL_SPI_ReceiveData4.
Do you have some suggestions?
Furthermore, is it possible to set the SPI to use a 2-bit data width instead of 4? Something like SPI_InitStruct.DataWidth = LL_SPI_DATAWIDTH_2BIT: this way it should be easier to detect the commands, since 4, 8, 10 and 14 are all multiples of 2.
Thank you.
With the new information about the controller in use:
It supports SPI data transfer lengths between 4 and 16 bits, so your first try doesn't seem so bad.
Your "problem" is that there is no 4-bit read function. The reason is that the receive data register always contains 16 bits, of which only 4 bits are valid data in your case; the other bits are '0'.
Your callback function
aRxBuffer[ubReceiveIndex++] = LL_SPI_ReceiveData8(SPI1);
will therefore write values from 0..15 into aRxBuffer, and you don't need a ReceiveData4() to get your answer :-)
See also the STM32L4 series reference manual, page 1193 ff.
The minimal addressable chunk of data is a byte, so even if you receive only 4 bits, the value you read is 8 bits wide.
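Putting that together, a sketch of the 4-bit setup and read, assuming an L4-class SPI as above. With frames of 8 bits or less, the RX FIFO threshold must be set to quarter-full so RXNE fires for every frame; the masking is merely defensive, since the upper bits read as 0:

void SPI1_Config4Bit(void)
{
    LL_SPI_SetDataWidth(SPI1, LL_SPI_DATAWIDTH_4BIT);
    /* Generate RXNE as soon as one byte-sized FIFO slot is filled. */
    LL_SPI_SetRxFIFOThreshold(SPI1, LL_SPI_RX_FIFO_TH_QUARTER);
}

void SPI1_Rx_Callback(void)
{
    /* Each read returns one 4-bit frame in the low nibble. */
    aRxBuffer[ubReceiveIndex++] = LL_SPI_ReceiveData8(SPI1) & 0x0Fu;
}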
BTW, what is this secret slave device with a varying word length?

Using LWIP SNMP, errors occur when the second snmp_varbind_alloc is called

I'm trying to send a proper trap using the lightweight IP (LWIP) SNMP agent.
The SNMP Wiki states that a proper trap should have:
a current sysUpTime value binding
an OID identifying the type of trap binding
an optional variable binding
However, it fails with vb->value != NULL when the second snmp_varbind_alloc is called.
When only the variable binding is sent, and none of the others, the trap is sent to the Network Management Station OK.
If the structure is defined in RAM and the fields are populated, effectively doing the allocation manually, then I can get two bindings to go out. It will single-step but not run, so now I need to make sure the RAM structures still exist while the trap is being sent out, before I destroy them. I can add a delay, which is not ideal, or find a function which tells me when the trap has been sent, so I can move on. I'm hesitant to post code which doesn't work; when (if) I get it working, I will show the code.
Here is the code for 3 bindings with opt.h changed from:
#define MEMP_NUM_SNMP_VALUE 3
to:
#define MEMP_NUM_SNMP_VALUE 9
struct snmp_obj_id sysupid      = { 9,  { 1, 3, 6, 1, 2, 1, 1, 3, 0 } };
struct snmp_obj_id trapoid      = { 11, { 1, 3, 6, 1, 6, 3, 1, 1, 4, 1, 0 } };
struct snmp_obj_id pttnotifyoid = { 8,  { 1, 3, 6, 1, 4, SNMP_ENTERPRISE_ID, 3, 18 } };
static unsigned char trapOID[10] = { 0x2b, 6, 1, 4, 1, 0x82, 0xe4, 0x3d, 3, 18 };

struct snmp_varbind *vb1, *vb2, *vb3;
u32_t *u32ptr, sysuptime;

void vSendTrapTaskDemo( void )
{
    snmp_varbind_list_free(&trap_msg.outvb);

    vb1 = snmp_varbind_alloc(&sysupid, SNMP_ASN1_TIMETICKS, 4);
    snmp_get_sysuptime(&sysuptime);
    vb1->value_len  = 4;
    vb1->value_type = 0x43; /* TimeTicks */
    u32ptr  = vb1->value;
    *u32ptr = sysuptime;
    snmp_varbind_tail_add(&trap_msg.outvb, vb1);

    vb2 = snmp_varbind_alloc(&trapoid, SNMP_ASN1_OBJ_ID, 11);
    memcpy(vb2->value, trapOID, 10);
    snmp_varbind_tail_add(&trap_msg.outvb, vb2);

    vb3 = snmp_varbind_alloc(&pttnotifyoid, SNMP_ASN1_COUNTER, 4);
    vb3->value_len  = 4;
    vb3->value_type = 0x02; /* Integer32 */
    u32ptr  = vb3->value;
    *u32ptr = 1;
    snmp_varbind_tail_add(&trap_msg.outvb, vb3);

    snmp_send_trap(SNMP_GENTRAP_ENTERPRISESPC, &sysupid, 18);
    snmp_varbind_list_free(&trap_msg.outvb);
}
The second binding has issues: the value of the OID is 0 (itu-t) when it should be 1.3.6.1.4.1.45629.3.18.
However, since level 1 needs only one binding, I'm going to set the 3-binding method aside for now, until told that level 2 is needed.
Your question was posted a while ago, but I had the same issue as you and couldn't find an answer...
I'm using LWIP on an STM32F107 and was completely unable to add a second varbind to my traps...
The solution was to increase the heap size of my µcontroller.
When using STM32CubeMX, it's located (for me) on line 61 of the startup_stm32f107xc.s file, with a default value of 0x200 (512 bytes); I simply doubled that to 0x400.
; <h> Heap Configuration
; <o> Heap Size (in Bytes) <0x0-0xFFFFFFFF:8>
; </h>
Heap_Size EQU 0x400
I hope this will help whoever is trying to use LWIP!

How can I fix this code to allow my AVR to talk over a serial port?

I've been pulling my hair out lately trying to get an ATmega162 on my STK200 to talk to my computer over RS232. I checked and made sure that the STK200 contains a MAX202CPE chip.
I've configured the chip to use its internal 8 MHz clock, divided by 8.
I've tried to copy the code out of the data sheet (and made changes where the compiler complained), but to no avail.
My code is below, could someone please help me fix the problems that I'm having?
I've confirmed that my serial port works on other devices and is not faulty.
Thanks!
#include <avr/io.h>
#include <avr/iom162.h>

#define BAUDRATE 4800

void USART_Init(unsigned int baud)
{
    UBRR0H = (unsigned char)(baud >> 8);
    UBRR0L = (unsigned char)baud;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);
    UCSR0C = (1 << URSEL0) | (1 << USBS0) | (3 << UCSZ00);
}

void USART_Transmit(unsigned char data)
{
    while (!(UCSR0A & (1 << UDRE0)));
    UDR0 = data;
}

unsigned char USART_Receive()
{
    while (!(UCSR0A & (1 << RXC0)));
    return UDR0;
}

int main()
{
    USART_Init(BAUDRATE);
    unsigned char data;
    // all are 1, all as output
    DDRB = 0xFF;
    while (1)
    {
        data = USART_Receive();
        PORTB = data;
        USART_Transmit(data);
    }
}
I have commented on Greg's answer, but would like to add one more thing. For this sort of problem, the gold-standard debugging method is to first understand asynchronous serial communications, then get an oscilloscope and see what's happening on the line. If characters are being exchanged and it's just a baud rate problem, this is particularly helpful, as you can calculate the baud rate you are actually seeing and then adjust the divisor accordingly.
Here is a super quick primer; no doubt you can find something much more comprehensive on Wikipedia or elsewhere.
Let's assume 8 data bits, no parity, 1 stop bit (the most common setup). Then if the character being transmitted is, say, 0x3f (= ASCII '?'), the line looks like this:
...--+   +---+---+---+---+---+---+       +---+--...
     | S | 1   1   1   1   1   1 | 0   0 | E
     +---+                       +---+---+
The high (1) level is +5V at the chip and -12V after conversion to RS232 levels.
The low (0) level is 0V at the chip and +12V after conversion to RS232 levels.
S is the start bit.
Then we have 8 data bits, least significant first, so here 00111111 = 0x3f = '?'.
E is the stop (e for end) bit.
Time advances from left to right, just like on an oscilloscope display. If the baud rate is 4800, each bit spans (1/4800) seconds ≈ 0.21 milliseconds.
The receiver works by sampling the line and looking for a falling edge (a quiescent line is simply logical '1' all the time). The receiver knows the baud rate and the number of start bits (1), so it measures one half bit time from the falling edge to find the middle of the start bit, then samples the line 8 bit times in succession after that to collect the data bits. The receiver then waits one more bit time (until halfway through the stop bit) and starts looking for another start bit (i.e. a falling edge). Meanwhile, the character read is made available to the rest of the system.
The transmitter guarantees that the next falling edge won't begin until the stop bit is complete. The transmitter can be programmed to always wait longer (with additional stop bits), but that is a legacy issue; extra stop bits were only required with very slow hardware and/or software setups.
I don't have reference material handy, but the baud rate register UBRR usually contains a divisor value rather than the desired baud rate itself. A quick Google search indicates that the correct divisor value for 4800 baud may be 239. So try:
divisor = 239;
UBRR0H = (unsigned char)(divisor >> 8);
UBRR0L = (unsigned char)divisor;
If this doesn't work, check with the reference docs for your particular chip for the correct divisor calculation formula.
For debugging UART communication, there are two useful things to do:
1) Do a loop-back at the connector and make sure you can read back what you write. If you send a character and get it back exactly, you know that the hardware is wired correctly, and that at least the basic set of UART register configuration is correct.
2) Repeatedly send the character 0x55 ("U") - the binary bit pattern 01010101 will allow you to quickly see the bit width on the oscilloscope, which will let you verify that the speed setting is correct.
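As a quick sketch of point 2, reusing USART_Transmit() from the question:

while (1)
{
    USART_Transmit(0x55); /* 'U' = 01010101: alternating bits make the bit width easy to measure on a scope */
}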
After reading the data sheet a little more thoroughly, I found that I was setting the baud rate incorrectly. The ATmega162 data sheet has a chart of clock frequencies plotted against baud rates, with the corresponding error.
For a 4800 baud rate and a 1 MHz clock frequency, the error is 0.2%, which was acceptable for me. The trick was passing 12 to the USART_Init() function instead of 4800.
Hope this helps someone else out!
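For reference, the usual AVR formula for normal-speed asynchronous mode is UBRR = F_CPU / (16 × baud) − 1; with the 1 MHz effective clock (8 MHz internal RC divided by 8) and 4800 baud, that gives 1000000 / 76800 − 1 ≈ 12, matching the value above. A sketch of an init that computes the divisor instead of taking it as a parameter (2 stop bits kept as in the original code):

#include <avr/io.h>

#define F_CPU_HZ   1000000UL  /* 8 MHz internal RC / 8 */
#define BAUD_RATE  4800UL
#define UBRR_VALUE ((F_CPU_HZ / (16UL * BAUD_RATE)) - 1UL)  /* = 12 at 1 MHz */

void USART_Init(void)
{
    UBRR0H = (unsigned char)(UBRR_VALUE >> 8);
    UBRR0L = (unsigned char)UBRR_VALUE;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);
    UCSR0C = (1 << URSEL0) | (1 << USBS0) | (3 << UCSZ00);
}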