Why does xQueueReceive throw an unhandled exception (LoadProhibited)?

I'm working on an ESP32 FreeRTOS application with two tasks. Its purpose is to take UART messages received from a peripheral device and transmit them via MQTT to a central broker.
The first task reads input from Serial1, processes the contents into a message structure, and adds it to a FreeRTOS queue:
typedef struct {
    int length;
    char buffer[AZ_EL_MAX_MESSAGE_LENGTH];
} tag_message_t;

void uart_read_task(void * pvParameters)
{
    BaseType_t xStatus;
    tag_message_t tag_message;
    int first_char;
    char message_buffer[AZ_EL_MAX_MESSAGE_LENGTH];

    while (true) {
        while (Serial1.available())
        {
            first_char = Serial1.read();
            if (first_char == '+') // Indicates the beginning of a message
            {
                for (int i = 0; i < AZ_EL_MAX_MESSAGE_LENGTH; i++)
                {
                    message_buffer[i] = Serial1.read();
                    if (message_buffer[i] == '\n') // End of message received
                    {
                        ESP_LOGV(TAG, "Message found: %s", message_buffer);
                        strncpy(tag_message.buffer, message_buffer, i + 1);
                        tag_message.length = i + 1;
                        xStatus = xQueueSend(xMessagesToSendQueue, (void*) &tag_message, 0);
                        if (xStatus != pdPASS)
                            ESP_LOGW(TAG, "Failed to queue message.");
                        break;
                    }
                }
            }
        }
        vTaskDelay(pdMS_TO_TICKS(20)); // Wait the minimum BLE advertisement period for messages to come in, i.e. 20 ms
    }
}
The main loop() (which is technically the second FreeRTOS task) then attempts to receive from that queue and transmit over MQTT to a local broker:
void setup()
{
    Serial.begin(115200);

    // Configure and start WiFi
    configure_network();
    connect_network();

    // Configure the MQTT connection
    configure_mqtt_client();

    // Configure and create the inter-task queues
    xMessagesToSendQueue = xQueueCreate(100, sizeof(tag_message_t));
    if (xMessagesToSendQueue == NULL) {
        ESP_LOGE(TAG, "Unable to create messaging queue. Will not create UART handling message queue.");
        delay(10000);
        esp_restart();
    } else {
        ESP_LOGI(TAG, "Messaging queue generated");
        configure_uart();
        xTaskCreate(uart_read_task, "UART_Processing", 20000, NULL, 1, NULL);
    }
}
void loop()
{
    const TickType_t xTicksToWait = pdMS_TO_TICKS(100); // milliseconds to wait
    tag_message_t received_message;

    if (network_connected) {
        connect_mqtt_client();
        while (mqtt_client.connected())
        {
            mqtt_client.loop();
            // Process messages on the xMessagesToSendQueue
            if (xMessagesToSendQueue != NULL)
            {
                ESP_LOGI(TAG, "Processing message");
                if (xQueueReceive(xMessagesToSendQueue, &received_message, xTicksToWait) == pdPASS)
                {
                    ESP_LOGD(TAG, "Received message, transmitting.");
                    if (!mqtt_client.publish("aoa", received_message.buffer, received_message.length))
                        ESP_LOGW(TAG, "Failed transmission.");
                }
                else
                {
                    vTaskDelay(pdMS_TO_TICKS(50));
                }
            }
            else
            {
                ESP_LOGE(TAG, "Messages queue is null.");
            }
        }
    } else {
        ESP_LOGE(TAG, "WARNING Device not connected to the network. Reconnecting.");
        connect_network();
    }
    delay(5000);
}
I've verified that the MQTT broker works, that the device connects to WiFi, and that it can properly read messages from Serial1. HOWEVER, the xQueueReceive() call in loop() throws a LoadProhibited exception every time it's called.
Can anyone tell me what I'm getting wrong here?

All, thank you for your help. It turns out this wasn't a FreeRTOS issue. After a little research (i.e. reading up and watching a more experienced engineer explain things: https://hackaday.com/2017/08/17/secret-serial-port-for-arduinoesp32/), it turns out the ESP32's default Serial1 pins are connected to the onboard flash memory.
Every time I tried Serial1.read() or Serial1.readBytesUntil(), the ESP32 crashed. Turns out reading the flash pins is taboo?
I replaced Serial1.read() with Serial2.read() (and the other Serial1 calls likewise). That fixed everything. Now I'm off to optimizing my queues!
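For anyone hitting the same issue: on many ESP32 boards, UART1's default pins (GPIO9/GPIO10) are routed to the SPI flash, so another option is to keep Serial1 but remap it to free pins. A minimal sketch, assuming the Arduino-ESP32 HardwareSerial API and that GPIO25/GPIO26 are unused on your board:

#include <HardwareSerial.h>

void configure_uart()
{
    // Arduino-ESP32 signature: begin(baud, config, rxPin, txPin).
    // Remapping moves UART1 off the flash pins (GPIO9/GPIO10).
    Serial1.begin(115200, SERIAL_8N1, 25, 26); // RX = GPIO25, TX = GPIO26 (assumed free)

    // Alternatively, UART2's defaults (GPIO16/GPIO17 on many boards) avoid the flash:
    // Serial2.begin(115200);
}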

You are trying to receive an address from the queue without casting it.
Two solutions: you either declare received_message as a pointer:
tag_message_t* received_message;
...
received_message = new tag_message_t;
if (xQueueReceive(xMessagesToSendQueue, received_message, xTicksToWait) == pdPASS)
and don't forget to delete it right after usage.
Or you can cast it after receiving:
if (xQueueReceive(xMessagesToSendQueue, &received_message, xTicksToWait) == pdPASS)
{
    received_message = *(static_cast<tag_message_t*>(&received_message));
    ESP_LOGD(TAG, "Received message, transmitting.");
    if (!mqtt_client.publish("aoa", received_message.buffer, received_message.length))
        ESP_LOGW(TAG, "Failed transmission.");
}
or any other sort of dereferencing you might want to try.
There is also the possibility that xQueueReceive is already doing that! So let's be sure of what is going on by adding this:
ESP_LOGD(TAG, "Received message, transmitting. %s", received_message.buffer);
right after you get the message.
I don't think tag_message is being deleted inside the task, so the struct should still be valid and present; if you properly cast/dereference its address, you should be able to read the message without any issues.
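Worth noting for later readers: FreeRTOS queues pass items by copy. Since the queue was created with sizeof(tag_message_t), xQueueSend() copies the whole struct into the queue's own storage and xQueueReceive() copies it back out into the buffer you pass, so no cast or heap allocation is needed. A minimal sketch of that pattern, reusing the names from the question:

tag_message_t received_message; // plain stack variable; no new/delete required

// xQueueReceive() copies sizeof(tag_message_t) bytes into &received_message
// and returns pdPASS on success.
if (xQueueReceive(xMessagesToSendQueue, &received_message, xTicksToWait) == pdPASS)
{
    // received_message now holds an independent copy of the queued struct
    ESP_LOGD(TAG, "Received message: %s", received_message.buffer);
}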

Related

Use UART events with Simplelink in Contiki-ng

I'm trying to receive serial line messages using the UART0 of a CC1310 with Contiki-NG.
I have implemented a UART callback function which will group all the characters received into a single variable, and stop collecting when it receives a '\n' character.
int uart_handler(unsigned char c)
{
    if (c == '\n')
    {
        end_of_string = 1;
        index = 0;
    }
    else
    {
        received_message_from_uart[index] = c;
        index++;
    }
    return 0;
}
UART0 is initialised in the main (and only) process of the system, which then waits in an infinite while loop until the end_of_string flag is set:
PROCESS_THREAD(udp_client_process, ev, data)
{
    PROCESS_BEGIN();

    uart0_init();
    uart0_set_callback(uart_handler);

    while (1)
    {
        // wait for uart message
        if (end_of_string == 1 && received_message_from_uart[0] != 0)
        {
            LOG_INFO("Received message\n");
            index = 0;
            end_of_string = 0;
            // Delete received message from uart
            memset(received_message_from_uart, 0, sizeof received_message_from_uart);
        }
        etimer_set(&timer, 0.1 * CLOCK_SECOND);
        PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
    }

    PROCESS_END();
}
As you can see, the while loop pauses at each iteration on a minimal-period timer event. Although this method works, I think it is a bad way to do it, as I have read that there are UART events that can be used, but I have not been able to find anything for the CC1310.
Is it possible to use the CC1310 (or any other SimpleLink platform) UART with events and stop doing unnecessary iterations until a message has reached the device?
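Not CC1310-specific, but the usual Contiki-NG way to avoid the polling timer is to have the UART callback poll the process and then block on the poll event; a sketch reusing the names from the question (process_poll() is safe to call from interrupt context):

int uart_handler(unsigned char c)
{
    if (c == '\n')
    {
        end_of_string = 1;
        index = 0;
        process_poll(&udp_client_process); // wake the process only when a full line arrived
    }
    else
    {
        received_message_from_uart[index] = c;
        index++;
    }
    return 0;
}

PROCESS_THREAD(udp_client_process, ev, data)
{
    PROCESS_BEGIN();

    uart0_init();
    uart0_set_callback(uart_handler);

    while (1)
    {
        // sleep until the callback polls us; no periodic timer needed
        PROCESS_WAIT_EVENT_UNTIL(ev == PROCESS_EVENT_POLL && end_of_string == 1);
        LOG_INFO("Received message\n");
        end_of_string = 0;
        memset(received_message_from_uart, 0, sizeof received_message_from_uart);
    }

    PROCESS_END();
}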

How to put the BG96 into power save mode between sending messages to Azure IoT Hub over HTTP

I'm using a Nucleo L496ZG, an X-NUCLEO-IKS01A2, and the Quectel BG96 module to send sensor data (temperature, humidity, etc.) to Azure IoT Central over HTTP.
I've been using the example implementation provided by Avnet here, which works fine, but it's not power optimized: with a 6700 mAh battery pack it only lasts around 30 hours sending telemetry every ~10 seconds. The goal is for it to last around a week. I'm open to increasing the time between messages, but I also want to save power in between sending.
I've gone over the Quectel BG96 manuals and I've tried two things:
1) Powering off the device by driving the PWRKEY and turning it back on when I need to send a message.
I've gotten this to work, kinda… until I get a hard fault exception, which happens seemingly randomly anywhere from within ~5 minutes of running to 2 hours (messages successfully sending prior to the exception). The output of the crash log parser is the same every time:
Crash location = strncmp [0x08038DF8] (based on PC value)
Caller location = _findenv_r [0x0804119D] (based on LR value)
Stack Pointer at the time of crash = [20008128]
Target and Fault Info:
Processor Arch: ARM-V7M or above
Processor Variant: C24
Forced exception, a fault with configurable priority has been escalated to HardFault
A precise data access error has occurred. Faulting address: 03060B30
The caller location traces back to my .map file and I don't know what to make of it.
My code:
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.

//#define USE_MQTT

#include <stdlib.h>
#include "mbed.h"
#include "iothubtransporthttp.h"
#include "iothub_client_core_common.h"
#include "iothub_client_ll.h"
#include "azure_c_shared_utility/platform.h"
#include "azure_c_shared_utility/agenttime.h"
#include "jsondecoder.h"
#include "bg96gps.hpp"
#include "azure_message_helper.h"

#define IOT_AGENT_OK CODEFIRST_OK

#include "azure_certs.h"

/* initialize the expansion board && sensors */
#include "XNucleoIKS01A2.h"
static HTS221Sensor  *hum_temp;
static LSM6DSLSensor *acc_gyro;
static LPS22HBSensor *pressure;

static const char* connectionString = "xxx";

// to report F uncomment this #define CTOF(x) (((double)(x)*9/5)+32)
#define CTOF(x) (x)

Thread azure_client_thread(osPriorityNormal, 10*1024, NULL, "azure_client_thread");
static void azure_task(void);
EventFlags deleteOK;
size_t g_message_count_send_confirmations;

/* create the GPS elements for example program */
BG96Interface* bg96Interface;

//static int tilt_event;

// void mems_int1(void)
// {
//     tilt_event++;
// }

void mems_init(void)
{
    //acc_gyro->attach_int1_irq(&mems_int1); // Attach callback to LSM6DSL INT1
    hum_temp->enable();                      // Enable HTS221 environmental sensor
    pressure->enable();                      // Enable barometric pressure sensor
    acc_gyro->enable_x();                    // Enable LSM6DSL accelerometer
    //acc_gyro->enable_tilt_detection();     // Enable Tilt Detection
}

void powerUp(void)
{
    if (platform_init() != 0) {
        printf("Error initializing the platform\r\n");
        return;
    }
    bg96Interface = (BG96Interface*) easy_get_netif(true);
}

void BG96_Modem_PowerOFF(void)
{
    DigitalOut BG96_RESET(D7);
    DigitalOut BG96_PWRKEY(D10);
    DigitalOut BG97_WAKE(D11);
    BG96_RESET = 0;
    BG96_PWRKEY = 0;
    BG97_WAKE = 0;
    wait_ms(300);
}

void powerDown()
{
    platform_deinit();
    BG96_Modem_PowerOFF();
}

//
// The main routine simply prints a banner, initializes the system,
// starts the worker threads and waits for a termination (join)
int main(void)
{
    //printStartMessage();

    XNucleoIKS01A2 *mems_expansion_board = XNucleoIKS01A2::instance(I2C_SDA, I2C_SCL, D4, D5);
    hum_temp = mems_expansion_board->ht_sensor;
    acc_gyro = mems_expansion_board->acc_gyro;
    pressure = mems_expansion_board->pt_sensor;

    azure_client_thread.start(azure_task);
    azure_client_thread.join();
    platform_deinit();
    printf(" - - - - - - - ALL DONE - - - - - - - \n");
    return 0;
}

static void send_confirm_callback(IOTHUB_CLIENT_CONFIRMATION_RESULT result, void* userContextCallback)
{
    //userContextCallback;
    // When a message is sent this callback will get invoked
    g_message_count_send_confirmations++;
    deleteOK.set(0x1);
}

void sendMessage(IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle, char* buffer, size_t size)
{
    IOTHUB_MESSAGE_HANDLE messageHandle = IoTHubMessage_CreateFromByteArray((const unsigned char*)buffer, size);
    if (messageHandle == NULL) {
        printf("unable to create a new IoTHubMessage\r\n");
        return;
    }
    if (IoTHubClient_LL_SendEventAsync(iotHubClientHandle, messageHandle, send_confirm_callback, NULL) != IOTHUB_CLIENT_OK)
        printf("FAILED to send! [RSSI=%d]\n", platform_RSSI());
    else
        printf("OK. [RSSI=%d]\n", platform_RSSI());
    IoTHubMessage_Destroy(messageHandle);
}

void azure_task(void)
{
    //bool tilt_detection_enabled = true;
    float gtemp, ghumid, gpress;

    int k;
    int msg_sent = 1;

    while (true) {
        powerUp();
        mems_init();

        /* Setup IoTHub client configuration */
        IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle = IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
        if (iotHubClientHandle == NULL) {
            printf("Failed on IoTHubClient_Create\r\n");
            return;
        }

        // add the certificate information
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "TrustedCerts", certificates) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"TrustedCerts\"\r\n");

#if MBED_CONF_APP_TELUSKIT == 1
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "product_info", "TELUSIOTKIT") != IOTHUB_CLIENT_OK)
            printf("failure to set option \"product_info\"\r\n");
#endif

        // polls will happen effectively at ~10 seconds. The default value of minimumPollingTime is 25 minutes.
        // For more information, see:
        // https://azure.microsoft.com/documentation/articles/iot-hub-devguide/#messaging
        unsigned int minimumPollingTime = 9;
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "MinimumPollingTime", &minimumPollingTime) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"MinimumPollingTime\"\r\n");

        IoTDevice* iotDev = (IoTDevice*)malloc(sizeof(IoTDevice));
        if (iotDev == NULL) {
            return;
        }
        setUpIotStruct(iotDev);

        char*  msg;
        size_t msgSize;

        hum_temp->get_temperature(&gtemp); // get Temp
        hum_temp->get_humidity(&ghumid);   // get Humidity
        pressure->get_pressure(&gpress);   // get pressure

        iotDev->Temperature = CTOF(gtemp);
        iotDev->Humidity    = (int)ghumid;
        iotDev->Pressure    = (int)gpress;

        printf("(%04d)", msg_sent++);
        msg = makeMessage(iotDev);
        msgSize = strlen(msg);
        sendMessage(iotHubClientHandle, msg, msgSize);
        free(msg);

        iotDev->Tilt &= 0x2;

        /* schedule IoTHubClient to send events/receive commands */
        IOTHUB_CLIENT_STATUS status;
        while ((IoTHubClient_LL_GetSendStatus(iotHubClientHandle, &status) == IOTHUB_CLIENT_OK) && (status == IOTHUB_CLIENT_SEND_STATUS_BUSY))
        {
            IoTHubClient_LL_DoWork(iotHubClientHandle);
            ThisThread::sleep_for(100);
        }

        deleteOK.wait_all(0x1);
        free(iotDev);
        IoTHubClient_LL_Destroy(iotHubClientHandle);
        powerDown();
        ThisThread::sleep_for(300000);
    }
    return;
}
I know PSM is probably the way to go, since powering the device on and off draws a lot of power, but it would be useful if someone had an idea of what is happening here.
2) Putting the device into PSM between sending messages.
The BG96 library in the example code I'm using doesn't have a method to turn on PSM, so I tried to implement my own. When I try to run it, it basically runs into an exception right away, so I know it's wrong (I'm very new to embedded development and have no prior experience with AT commands).
/** ----------------------------------------------------------
* this is a method provided by the current library
* @brief  Tx a string to the BG96 and wait for an OK response
* @param  none
* @retval true if OK received, false otherwise
*/
bool BG96::tx2bg96(char* cmd)
{
    bool ok = false;
    _bg96_mutex.lock();
    ok = _parser.send(cmd) && _parser.recv("OK");
    _bg96_mutex.unlock();
    return ok;
}

/**
* method I created in an attempt to use PSM
*/
bool BG96::psm(void)
{
    return tx2bg96((char*)"AT+CPSMS=1,,,\"00000100\",\"00000001\"");
}
Can someone tell me what I'm doing wrong and provide any guidance on how I can achieve my goal of having my device run on battery for longer?
Thank you!!
I got Power Saving Mode working by using Mbed's ATCmdParser and the AT+QPSMS command, as per Quectel's docs. Note that the modem doesn't always go into power saving mode right away. I also found that I have to restart the modem afterwards or else I get weird behaviour. My code looks something like this:
bool BG96::psm(char* T3412, char* T3324)
{
    _bg96_mutex.lock();
    if (_parser.send("AT+QPSMS=1,,,\"%s\",\"%s\"", T3412, T3324) && _parser.recv("OK")) {
        _bg96_mutex.unlock();
    } else {
        _bg96_mutex.unlock();
        return false;
    }
    return BG96Ready(); // restarts the modem
}
To send a message to Azure, the modem needs to be manually woken up by driving the PWRKEY to start bi-directional communication, and a new client handle needs to be created and torn down every time, since the Azure connection uses keepAlive and the modem is unreachable while it's in PSM.
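For reference, the two timer arguments are 3GPP bit-string encodings (bits 8-6 select the unit, bits 5-1 the multiplier). A usage sketch; the concrete values and the bare BG96 declaration are assumptions, so check them against Quectel's AT manual and the library's constructor before copying:

// Hypothetical values: T3412 = "00000100" (10-minute unit x 4 = 40 min periodic TAU),
// T3324 = "00000001" (2-second unit x 1 = 2 s active time).
BG96 modem; // assumes the library's usual construction/bring-up has been done
if (!modem.psm((char*)"00000100", (char*)"00000001")) {
    printf("Failed to enable PSM\r\n");
}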

Linux-Xenomai Serial Communication using xeno_16550A module

I'm a beginner with RTOS, and I'm using Xenomai v2.6.3.
I'm trying to receive some data over serial communication.
I did my best on the task, following Xenomai's guide and open sources, but it doesn't work well.
Here is the link to the guide: https://xenomai.org//serial-16550a-driver/
I just followed the sequence to use the xeno_16550A module (with port io = 0x2f8 and irq = 3).
I also followed this open-source example: http://www.acadis.org/pages/captain.at/serial-port-example
It works well for the write task, but the read task doesn't work well.
It gives me the error message "error while RTSER_RTIOC_WAIT_EVENT, code -110" (meaning the operation timed out).
Moreover, I checked IRQ number 3 with the command 'cat /proc/xenomai/irq', but the interrupt count doesn't increase.
In my case, I don't need to write data, so I erased the write task code.
The read task procedure follows:
void read_task_proc(void *arg)
{
    int ret;
    ssize_t red = 0;
    struct rtser_event rx_event;

    while (1) {
        /* waiting for event */
        ret = rt_dev_ioctl(my_fd, RTSER_RTIOC_WAIT_EVENT, &rx_event);
        if (ret) {
            printf(RTASK_PREFIX "error while RTSER_RTIOC_WAIT_EVENT, code %d\n", ret);
            if (ret == -ETIMEDOUT)
                continue;
            break;
        }

        unsigned char buf[1];
        red = rt_dev_read(my_fd, &buf, 1);
        if (red < 0) {
            printf(RTASK_PREFIX "error while rt_dev_read, code %d\n", (int)red);
        } else {
            printf(RTASK_PREFIX "only %d byte received , char : %c\n", (int)red, buf[0]);
        }
    }

    if (my_state & STATE_FILE_OPENED) {
        if (!close_file(my_fd, READ_FILE " (rtser)")) {
            my_state &= ~STATE_FILE_OPENED;
        }
    }
    printf(RTASK_PREFIX "exit\n");
}
I can guess at two possible causes of the problem:
1) The buffer size is wrong, or the buffer is already full when new data is received.
2) The RX interrupt doesn't work.
I want to check whether either of these is the case, but how can I check?
Furthermore, does anybody know the cause of the problem? Please give me comments.
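One thing worth ruling out first is the driver configuration: with the rtserial profile, RTSER_RTIOC_WAIT_EVENT returns -ETIMEDOUT (-110) when event_timeout expires and no event enabled in event_mask has fired, so make sure the RX event is actually enabled. A sketch of doing that explicitly (field and flag names from the Xenomai 2.x rtserial profile; the baud rate is an assumption):

#include <rtdm/rtserial.h>

static int configure_rx_events(int fd)
{
    struct rtser_config config = { 0 };

    config.config_mask   = RTSER_SET_BAUD | RTSER_SET_EVENT_MASK | RTSER_SET_TIMEOUT_EVENT;
    config.baud_rate     = 115200;                 /* assumed; match your peripheral */
    config.event_mask    = RTSER_EVENT_RXPEND;     /* wake WAIT_EVENT when RX data is pending */
    config.event_timeout = RTSER_TIMEOUT_INFINITE; /* or a finite value in nanoseconds */

    /* returns 0 on success, a negative errno code otherwise */
    return rt_dev_ioctl(fd, RTSER_RTIOC_SET_CONFIG, &config);
}

If the IRQ count in /proc/xenomai/irq still stays at zero after that, the io/irq parameters given to xeno_16550A are the next suspect, since the UART interrupt is evidently never reaching the handler.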

GStreamer demo doesn't work in Virtual Machine (seeking simple example)

I am trying to code an extremely simple GStreamer app. It doesn't matter what it does, so long as GStreamer does something. Even just displaying some text or a simple JPEG would be fine.
Below is about the best example that I could find by Googling (I have added a few error checks). When I run it in a Linux virtual machine running under Windows, I see this console message:
libEGL warning: pci id for fd 4: 80ee:beef, driver (null)
libEGL warning: DRI2: failed to open vboxvideo (search paths
/usr/lib/i386-linux-gnu/dri:${ORIGIN}/dri:/usr/lib/dri)
Googling indicates that this is an error with 3D rendering inside a virtual machine. I can find no solution.
So, can someone fix the code below so that it will run in a VM? I assume that would mean avoiding 3D rendering, so maybe display an image or some text? It is not necessary to play video; this is just a simple proof of concept of using GStreamer inside something else (which has to be running in a VM).
Here's the code ...
void GstreamerPlayVideo()
{
    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;
    int argc;
    GError *error = NULL;

    /* Initialize GStreamer */
    if (gst_init_check(&argc, NULL, &error) == TRUE)
    {
        /* Build the pipeline */
        // Change URL to test failure
        pipeline = gst_parse_launch ("playbin uri=http://docs.gstreamer.com/media/sintel_trailer-480p.webm", &error);
        //// pipeline = gst_parse_launch ("playbin uri=http://tecfa.unige.ch/guides/x3d/www.web3d.org/x3d/content/examples/HelloWorld.gif", &error);
        if (pipeline != NULL)
        {
            /* Start playing */
            gst_element_set_state (pipeline, GST_STATE_PLAYING);

            /* wait until it's up and running or failed */
            if (gst_element_get_state (pipeline, NULL, NULL, -1) == GST_STATE_CHANGE_FAILURE)
            {
                g_error ("GST failed to go into PLAYING state");
                exit(1);
            }

            /* Wait until error or EOS */
            bus = gst_element_get_bus (pipeline);
            if (bus != NULL)
            {
                msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

                /* Parse message */
                if (msg != NULL)
                {
                    gchar *debug_info;

                    switch (GST_MESSAGE_TYPE (msg))
                    {
                        case GST_MESSAGE_ERROR:
                            gst_message_parse_error (msg, &error, &debug_info);
                            g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), error->message);
                            g_printerr ("Debugging information: %s\n", debug_info ? debug_info : "none");
                            g_clear_error (&error);
                            g_free (debug_info);
                            break;
                        case GST_MESSAGE_EOS:
                            g_print ("End-Of-Stream reached.\n");
                            break;
                        default:
                            /* We should not reach here because we only asked for ERRORs and EOS */
                            g_printerr ("Unexpected message received.\n");
                            break;
                    }
                    gst_message_unref (msg);
                }

                /* Free resources */
                gst_object_unref (bus);
                gst_element_set_state (pipeline, GST_STATE_NULL);
                gst_object_unref (pipeline);
            }
            else
            {
                g_print ("GST get bus error: %s\n", error->message);
                exit(1);
            }
        }
        else
        {
            g_print ("GST parse error: %s\n", error->message);
            exit(1);
        }
    }
    else
    {
        g_print ("GST init error: %s\n", error->message);
        exit(1);
    }
} // GstreamerPlayVideo()
Try specifying a video sink by hand in your pipeline:
videotestsrc ! ximagesink
Your system may have an EGL video sink plugin installed as the primary video plugin; ximagesink seems a little more likely to work inside a VM.
Like this:
// this line is where you're creating your pipeline
pipeline = gst_parse_launch ("videotestsrc ! ximagesink", &error);
I recommend experimenting with the gst-launch command first so you can get the hang of pipeline syntax, what sinks and sources are, etc. The simplest test you can run is something like this (if you have GStreamer 1.0 installed; you may have 0.10), from the command line:
gst-launch-1.0 videotestsrc ! autovideosink
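If it helps, here is that suggestion wrapped into a complete minimal program (a sketch assuming GStreamer 1.0; the build line in the comment is the usual pkg-config incantation):

/* build: gcc gst_min.c $(pkg-config --cflags --libs gstreamer-1.0) */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    GError *error = NULL;

    gst_init(&argc, &argv);

    /* videotestsrc needs no network or decoder; ximagesink avoids EGL/3D paths */
    GstElement *pipeline = gst_parse_launch("videotestsrc ! ximagesink", &error);
    if (pipeline == NULL) {
        g_printerr("Parse error: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* block until an error occurs or the stream ends */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}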

No r/w bit made available to firmware by I2C peripheral of STM32F40x chips

I was wondering if anyone has found a way to determine the intention of a master communicating with an STM32F40x chip. From the perspective of the firmware on the STM32F40x, the ADDRess sent by the master is not available, and the R/W bit (bit 0 of the address byte) contained therein is also not available. So how can I prevent collisions? Has anyone else dealt with this? If so, what techniques did you use? My tentative solution is below for reference. I delayed any writes to the DR data register until the TXE interrupt occurs. I thought at first this would be too late and a byte of garbage would be clocked out, but it seems to be working.
static inline void LLEVInterrupt(uint16_t irqSrc)
{
    uint8_t i;
    volatile uint16_t status;
    I2CCBStruct* buffers;
    I2C_TypeDef* addrBase;

    // see which IRQ occurred, process accordingly...
    switch (irqSrc)
    {
        case I2C_BUS_CHAN_1:
            addrBase = this.addrBase1;
            buffers = &this.buffsBus1;
            break;
        case I2C_BUS_CHAN_2:
            addrBase = this.addrBase2;
            buffers = &this.buffsBus2;
            break;
        case I2C_BUS_CHAN_3:
            addrBase = this.addrBase3;
            buffers = &this.buffsBus3;
            break;
        default:
            while (1);
    }

    // ...START condition & address match detected
    if (I2C_GetITStatus(addrBase, I2C_IT_ADDR) == SET)
    {
        // I2C_IT_ADDR: Cleared by software reading SR1 register followed by reading SR2, or by
        // hardware when PE=0.
        // Note: Reading I2C_SR2 after reading I2C_SR1 clears the ADDR flag, even if the ADDR flag was
        // set after reading I2C_SR1. Consequently, I2C_SR2 must be read only when ADDR is found
        // set in I2C_SR1 or when the STOPF bit is cleared.
        status = addrBase->SR1;
        status = addrBase->SR2;

        // Reset the index and receive count
        buffers->txIndex = 0;
        buffers->rxCount = 0;

        // setup to ACK any Rx'd bytes
        I2C_AcknowledgeConfig(addrBase, ENABLE);
        return;
    }

    // Slave receiver mode
    if (I2C_GetITStatus(addrBase, I2C_IT_RXNE) == SET)
    {
        // I2C_IT_RXNE: Cleared by software reading or writing the DR register
        // or by hardware when PE=0.

        // copy the received byte to the Rx buffer
        buffers->rxBuf[buffers->rxCount] = (uint8_t)I2C_ReadRegister(addrBase, I2C_Register_DR);
        if (RX_BUFFER_SIZE > buffers->rxCount)
        {
            buffers->rxCount++;
        }
        return;
    }

    // Slave transmitter mode
    if (I2C_GetITStatus(addrBase, I2C_IT_TXE) == SET)
    {
        // I2C_IT_TXE: Cleared by software writing to the DR register or
        // by hardware after a start or a stop condition or when PE=0.

        // send any remaining bytes
        I2C_SendData(addrBase, buffers->txBuf[buffers->txIndex]);
        if (buffers->txIndex < buffers->txCount)
        {
            buffers->txIndex++;
        }
        return;
    }

    // ...STOP condition detected
    if (I2C_GetITStatus(addrBase, I2C_IT_STOPF) == SET)
    {
        // STOPF (STOP detection) is cleared by software sequence: a read operation
        // to I2C_SR1 register (I2C_GetITStatus()) followed by a write operation to
        // I2C_CR1 register (I2C_Cmd() to re-enable the I2C peripheral).
        // From the reference manual RM0368:
        //   Figure 163. Transfer sequence diagram for slave receiver
        //   if (STOPF == 1) {READ SR1; WRITE CR1}

        // clear the IRQ status
        status = addrBase->SR1;
        // Write to CR1
        I2C_Cmd(addrBase, ENABLE);

        // read cycle (reset the status)?
        if (buffers->txCount > 0)
        {
            buffers->txCount = 0;
            buffers->txIndex = 0;
        }

        // write cycle begun?
        if (buffers->rxCount > 0)
        {
            // pass the I2C data to the enabled protocol handler
            for (i = 0; i < buffers->rxCount; i++)
            {
#if (COMM_PROTOCOL == COMM_PROTOCOL_DEBUG)
                status = ProtProcRxData(buffers->rxBuf[i]);
#elif (COMM_PROTOCOL == COMM_PROTOCOL_PTEK)
                status = PTEKProcRxData(buffers->rxBuf[i]);
#else
#error ** Invalid Host Protocol Selected **
#endif
                if (status != ST_OK)
                {
                    LogErr(ST_COMM_FAIL, __LINE__);
                }
            }
            buffers->rxCount = 0;
        }
        return;
    }

    if (I2C_GetITStatus(addrBase, I2C_IT_AF) == SET)
    {
        // The NAck received from the host on the last byte of a transmit
        // is shown as an acknowledge failure and must be cleared by
        // writing 0 to the AF bit in SR1.
        // This is not a real error but just how the i2c slave transmission process works.
        // The hardware has no way to know how many bytes are to be transmitted, so the
        // NAck is assumed to be a failed byte transmission.
        // EV3-2: AF=1; AF is cleared by writing '0' in AF bit of SR1 register.
        I2C_ClearITPendingBit(addrBase, I2C_IT_AF);
        return;
    }

    if (I2C_GetITStatus(addrBase, I2C_IT_BERR) == SET)
    {
        // There are extremely infrequent bus errors when testing with I2C Stick.
        // Safer to have this check and clear than to risk an
        // infinite loop of interrupts.
        // Set by hardware when the interface detects an SDA rising or falling
        // edge while SCL is high, occurring in a non-valid position during a
        // byte transfer.
        // Cleared by software writing 0, or by hardware when PE=0.
        I2C_ClearITPendingBit(addrBase, I2C_IT_BERR);
        LogErr(ST_COMM_FAIL, __LINE__);
        return;
    }

    if (I2C_GetITStatus(addrBase, I2C_IT_OVR) == SET)
    {
        // Check for other error conditions that must be cleared.
        I2C_ClearITPendingBit(addrBase, I2C_IT_OVR);
        LogErr(ST_COMM_FAIL, __LINE__);
        return;
    }

    if (I2C_GetITStatus(addrBase, I2C_IT_TIMEOUT) == SET)
    {
        // Check for other error conditions that must be cleared.
        I2C_ClearITPendingBit(addrBase, I2C_IT_TIMEOUT);
        LogErr(ST_COMM_FAIL, __LINE__);
        return;
    }

    // a spurious IRQ occurred; log it
    LogErr(ST_INV_STATE, __LINE__);
}
I'm not sure I understand you. Maybe you should provide more information or an example of what you would like to do.
Maybe this helps:
My experience is that in many I2C implementations the R/W bit is handled together with the 7-bit address as a single 8-bit address byte, so most of the time there is no separate function to set or clear the R/W bit.
In that convention, odd 8-bit address values (LSB = 1) indicate a read from the slave, and even values (LSB = 0) indicate a write to the slave.
There seems to be no way to determine whether the transaction initiated by receipt of the address is a read or a write, even though the hardware knows whether the LSB is set or clear. The intention of the master will only be known once the RXNE or TXE interrupt/bit occurs.
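One possible refinement, offered as a sketch rather than something from the posts above: per RM0090, the SR2 read that clears ADDR also returns the TRA flag (bit 2), which indicates whether the interface is acting as transmitter or receiver, so capturing that read in the ADDR handler may reveal the master's intent before the first RXNE/TXE. Verify against your reference manual before relying on it:

// inside the I2C_IT_ADDR branch; I2C_SR2_TRA comes from the CMSIS device header
status = addrBase->SR1;
status = addrBase->SR2; // this read clears ADDR and captures SR2
// TRA = 1: slave transmitter (master is reading); TRA = 0: slave receiver (master is writing)
uint8_t master_is_reading = (status & I2C_SR2_TRA) ? 1 : 0; // hypothetical flag name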