I'm parsing a very large CSV file using GCD functions (please see code below).
If I encounter an error I'd like to cancel dispatch_io_read. Is there a way to do that?
dispatch_io_read(channel,
                 0,
                 Int.max,
                 dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0))
{ (done, data, error) in
    guard error == 0 else {
        print("Read Error: \(error)")
        return
    }
    if done {
        lineBuffer.dealloc(bufferSize)
    }
    dispatch_data_apply(data)
    { (region, offset, buffer, size) -> Bool in
        print(size)
        let bytes = UnsafePointer<UInt8>(buffer)
        for var i = 0; i < size; i++ {
            switch bytes[i] {
            case self.cr: // ignore \r
                break
            case self.lf: // newline
                lineBuffer[bufferLength] = 0x00 // Null terminated
                line(line: String(UTF8String: lineBuffer)!)
                bufferLength = 0
            case _ where bufferLength < (bufferSize - 1): // Leave space for null termination
                lineBuffer[bufferLength++] = CChar(bytes[i])
            default:
                return false // Overflow! I would like to stop reading the file here.
            }
        }
        return true
    }
}
Calling dispatch_io_close(DISPATCH_IO_STOP) will cause running dispatch_io_read operations to be interrupted and their handlers to be passed the ECANCELED error (along with any partial results); see the dispatch_io_close(3) manpage.
Note that this does not interrupt the actual I/O system calls; it just prevents additional I/O system calls from being issued, so you may have to set an I/O channel high-water mark to ensure the appropriate level of I/O granularity for your application.
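For reference, a minimal sketch of that combination against the plain C GCD API (channel creation omitted; the same calls are available from Swift):
#include <dispatch/dispatch.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

// Sketch only: limit the chunk size with a high-water mark so a cancel takes
// effect quickly, and stop the channel when the parser gives up.
static void read_and_maybe_cancel(dispatch_io_t channel)
{
    // Deliver data in chunks of at most 64 KiB.
    dispatch_io_set_high_water(channel, 64 * 1024);

    dispatch_io_read(channel, 0, SIZE_MAX,
                     dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0),
                     ^(bool done, dispatch_data_t data, int error) {
        if (error == ECANCELED) {
            // The channel was stopped; this final callback still carries any
            // partial results in `data`.
            fprintf(stderr, "read cancelled\n");
            return;
        }
        // ... parse `data` here; on an unrecoverable parse error, stop the
        // channel so no further read callbacks are scheduled:
        // dispatch_io_close(channel, DISPATCH_IO_STOP);
        if (done) {
            // All requested data was delivered (or EOF was reached).
        }
    });
}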
I'm actually trying to receive serial line messages using the UART0 of a CC1310 with Contiki-NG.
I have implemented a UART callback function which groups all the received characters into a single variable and stops collecting when it receives a "\n" character.
int uart_handler(unsigned char c)
{
    if (c == '\n')
    {
        end_of_string = 1;
        index = 0;
    }
    else
    {
        received_message_from_uart[index] = c;
        index++;
    }
    return 0;
}
UART0 is initialised in the main (and only) process of the system, which then waits in an infinite while loop until the end_of_string flag is set:
PROCESS_THREAD(udp_client_process, ev, data)
{
    PROCESS_BEGIN();
    uart0_init();
    uart0_set_callback(uart_handler);
    while (1)
    {
        // wait for uart message
        if (end_of_string == 1 && received_message_from_uart[0] != 0)
        {
            LOG_INFO("Received message\n");
            index = 0;
            end_of_string = 0;
            // Delete received message from uart
            memset(received_message_from_uart, 0, sizeof received_message_from_uart);
        }
        etimer_set(&timer, 0.1 * CLOCK_SECOND);
        PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
    }
    PROCESS_END();
}
As you can see, the while loop pauses at each iteration on a minimal-time timer event. Although this method works, I think it is a bad way to do it: I have read that there are UART events that can be used, but I have not been able to find anything for the CC1310.
Is it possible to use the CC1310 (or any other simplelink platform) UART with events and stop doing unnecessary iterations until a message has reached the device?
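For what it's worth, here is a minimal sketch (not a tested answer) of the event-driven variant hinted at above, assuming the same uart0_init()/uart0_set_callback() API and buffer globals as in the question; process_poll() lets the UART callback wake the protothread, so the timer loop disappears:
#include "contiki.h"
#include "sys/log.h"
#include <string.h>
/* uart0_init()/uart0_set_callback() come from the platform, as in the question */

#define LOG_MODULE "App"
#define LOG_LEVEL  LOG_LEVEL_INFO

PROCESS(udp_client_process, "UDP client process");
AUTOSTART_PROCESSES(&udp_client_process);

static unsigned char received_message_from_uart[128];  /* buffer size is an assumption */
static unsigned int index = 0;
static volatile unsigned char end_of_string = 0;

int uart_handler(unsigned char c)
{
  if(c == '\n') {
    end_of_string = 1;
    process_poll(&udp_client_process);          /* wake the process from the ISR */
  } else if(index < sizeof(received_message_from_uart) - 1) {
    received_message_from_uart[index++] = c;
  }
  return 0;
}

PROCESS_THREAD(udp_client_process, ev, data)
{
  PROCESS_BEGIN();
  uart0_init();
  uart0_set_callback(uart_handler);
  while(1) {
    /* Block here until the callback polls us: no busy loop, no timer. */
    PROCESS_WAIT_EVENT_UNTIL(ev == PROCESS_EVENT_POLL && end_of_string);
    LOG_INFO("Received message\n");
    end_of_string = 0;
    index = 0;
    memset(received_message_from_uart, 0, sizeof received_message_from_uart);
  }
  PROCESS_END();
}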
I used RapidJSON to read JSON data. I can build my application in both Debug and Release mode, but the application crashes in Release mode.
using namespace rapidjson;
...
char *buffer;
long fileSize;
size_t fileReadingResult;
//obtain file size
fseek(pFile, 0, SEEK_END);
fileSize = ftell(pFile);
if (fileSize <= 0) return false;
rewind(pFile);
//allocate memory to contain the whole file
buffer = (char *)malloc(sizeof(char)*fileSize);
if (buffer == NULL) return false;
//copy the file into the buffer
fileReadingResult = fread(buffer, 1, fileSize, pFile);
if (fileReadingResult != fileSize) return false;
buffer[fileSize] = 0;
Document document;
document.Parse(buffer);
When I run it in Release mode, I encounter an unhandled exception: "A heap has been corrupted."
The application breaks at "res = _heap_alloc(size)" in the malloc.c file:
void * __cdecl _malloc_base (size_t size)
{
    void *res = NULL;
    // validate size
    if (size <= _HEAP_MAXREQ) {
        for (;;) {
            // allocate memory block
            res = _heap_alloc(size);
            // if successful allocation, return pointer to memory
            // if new handling turned off altogether, return NULL
            if (res != NULL)
            {
                break;
            }
            if (_newmode == 0)
            {
                errno = ENOMEM;
                break;
            }
            // call installed new handler
            if (!_callnewh(size))
                break;
            // new handler was successful -- try to allocate again
        }
It runs fine in Debug mode.
Maybe it is a memory leak issue with your malloc: it runs fine one time in Debug, but when you keep the application up longer it crashes.
Do you free your buffer after using it?
The reason is simple: you allocate a buffer of fileSize bytes, but after reading the file you write to the (fileSize+1)-th position with buffer[fileSize] = 0;
Fix: make the allocation one byte larger.
buffer = (char *)malloc(fileSize + 1);
Debug builds pad memory allocations with additional bytes, which is why the overflow does not crash there.
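Put together, a minimal sketch of the corrected read path (readWholeFile is just an illustrative name; the buffer is then handed to document.Parse() exactly as in the question and freed afterwards):
#include <stdio.h>
#include <stdlib.h>

/* Sketch only: read a whole file into a NUL-terminated buffer. The extra byte
 * keeps buffer[fileSize] = 0 inside the allocation. Caller must free(). */
char *readWholeFile(FILE *pFile, long *outSize)
{
    fseek(pFile, 0, SEEK_END);
    long fileSize = ftell(pFile);
    if (fileSize <= 0) return NULL;
    rewind(pFile);

    char *buffer = (char *)malloc(fileSize + 1);   /* +1 for the terminator */
    if (buffer == NULL) return NULL;

    if (fread(buffer, 1, fileSize, pFile) != (size_t)fileSize) {
        free(buffer);                              /* don't leak on a short read */
        return NULL;
    }
    buffer[fileSize] = 0;                          /* now in bounds */

    if (outSize) *outSize = fileSize;
    return buffer;
}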
I was wondering if anyone has found a way to determine the intention of a master communicating with an STM32F40x chip. From the perspective of the firmware on the STM32F40x, the address sent by the master is not available, and the R/W bit (bit 0 of the address) contained therein is also not available. So how can I prevent collisions? Has anyone else dealt with this? If so, what techniques did you use? My tentative solution is below for reference: I delayed any writes to the DR data register until the TXE interrupt occurs. I thought at first this would be too late and a byte of garbage would be clocked out, but it seems to be working.
static inline void LLEVInterrupt(uint16_t irqSrc)
{
    uint8_t i;
    volatile uint16_t status;
    I2CCBStruct* buffers;
    I2C_TypeDef* addrBase;

    // see which IRQ occurred, process accordingly...
    switch (irqSrc)
    {
        case I2C_BUS_CHAN_1:
            addrBase = this.addrBase1;
            buffers = &this.buffsBus1;
            break;
        case I2C_BUS_CHAN_2:
            addrBase = this.addrBase2;
            buffers = &this.buffsBus2;
            break;
        case I2C_BUS_CHAN_3:
            addrBase = this.addrBase3;
            buffers = &this.buffsBus3;
            break;
        default:
            while(1);
    }

    // ...START condition & address match detected
    if (I2C_GetITStatus(addrBase, I2C_IT_ADDR) == SET)
    {
        // I2C_IT_ADDR: Cleared by software reading the SR1 register followed by reading SR2,
        // or by hardware when PE=0.
        // Note: Reading I2C_SR2 after reading I2C_SR1 clears the ADDR flag, even if the ADDR flag was
        // set after reading I2C_SR1. Consequently, I2C_SR2 must be read only when ADDR is found
        // set in I2C_SR1 or when the STOPF bit is cleared.
        status = addrBase->SR1;
        status = addrBase->SR2;

        // Reset the index and receive count
        buffers->txIndex = 0;
        buffers->rxCount = 0;

        // setup to ACK any Rx'd bytes
        I2C_AcknowledgeConfig(addrBase, ENABLE);
        return;
    }

    // Slave receiver mode
    if (I2C_GetITStatus(addrBase, I2C_IT_RXNE) == SET)
    {
        // I2C_IT_RXNE: Cleared by software reading or writing the DR register
        // or by hardware when PE=0.

        // copy the received byte to the Rx buffer
        buffers->rxBuf[buffers->rxCount] = (uint8_t)I2C_ReadRegister(addrBase, I2C_Register_DR);
        if (RX_BUFFER_SIZE > buffers->rxCount)
        {
            buffers->rxCount++;
        }
        return;
    }

    // Slave transmitter mode
    if (I2C_GetITStatus(addrBase, I2C_IT_TXE) == SET)
    {
        // I2C_IT_TXE: Cleared by software writing to the DR register or
        // by hardware after a start or a stop condition or when PE=0.

        // send any remaining bytes
        I2C_SendData(addrBase, buffers->txBuf[buffers->txIndex]);
        if (buffers->txIndex < buffers->txCount)
        {
            buffers->txIndex++;
        }
        return;
    }

    // ...STOP condition detected
    if (I2C_GetITStatus(addrBase, I2C_IT_STOPF) == SET)
    {
        // STOPF (STOP detection) is cleared by software sequence: a read operation
        // to I2C_SR1 register (I2C_GetITStatus()) followed by a write operation to
        // I2C_CR1 register (I2C_Cmd() to re-enable the I2C peripheral).
        // From the reference manual RM0368:
        //   Figure 163. Transfer sequence diagram for slave receiver
        //   if (STOPF == 1) {READ SR1; WRITE CR1}

        // clear the IRQ status
        status = addrBase->SR1;
        // Write to CR1
        I2C_Cmd(addrBase, ENABLE);

        // read cycle (reset the status)?
        if (buffers->txCount > 0)
        {
            buffers->txCount = 0;
            buffers->txIndex = 0;
        }

        // write cycle begun?
        if (buffers->rxCount > 0)
        {
            // pass the I2C data to the enabled protocol handler
            for (i = 0; i < buffers->rxCount; i++)
            {
#if (COMM_PROTOCOL == COMM_PROTOCOL_DEBUG)
                status = ProtProcRxData(buffers->rxBuf[i]);
#elif (COMM_PROTOCOL == COMM_PROTOCOL_PTEK)
                status = PTEKProcRxData(buffers->rxBuf[i]);
#else
#error ** Invalid Host Protocol Selected **
#endif
                if (status != ST_OK)
                {
                    LogErr(ST_COMM_FAIL, __LINE__);
                }
            }
            buffers->rxCount = 0;
        }
        return;
    }

    if (I2C_GetITStatus(addrBase, I2C_IT_AF) == SET)
    {
        // The NAck received from the host on the last byte of a transmit
        // is shown as an acknowledge failure and must be cleared by
        // writing 0 to the AF bit in SR1.
        // This is not a real error but just how the i2c slave transmission process works.
        // The hardware has no way to know how many bytes are to be transmitted, so the
        // NAck is assumed to be a failed byte transmission.
        // EV3-2: AF=1; AF is cleared by writing ‘0’ in AF bit of SR1 register.
        I2C_ClearITPendingBit(addrBase, I2C_IT_AF);
        return;
    }

    if (I2C_GetITStatus(addrBase, I2C_IT_BERR) == SET)
    {
        // There are extremely infrequent bus errors when testing with I2C Stick.
        // Safer to have this check and clear than to risk an
        // infinite loop of interrupts.
        // Set by hardware when the interface detects an SDA rising or falling
        // edge while SCL is high, occurring in a non-valid position during a
        // byte transfer.
        // Cleared by software writing 0, or by hardware when PE=0.
        I2C_ClearITPendingBit(addrBase, I2C_IT_BERR);
        LogErr(ST_COMM_FAIL, __LINE__);
        return;
    }

    if (I2C_GetITStatus(addrBase, I2C_IT_OVR) == SET)
    {
        // Check for other error conditions that must be cleared.
        I2C_ClearITPendingBit(addrBase, I2C_IT_OVR);
        LogErr(ST_COMM_FAIL, __LINE__);
        return;
    }

    if (I2C_GetITStatus(addrBase, I2C_IT_TIMEOUT) == SET)
    {
        // Check for other error conditions that must be cleared.
        I2C_ClearITPendingBit(addrBase, I2C_IT_TIMEOUT);
        LogErr(ST_COMM_FAIL, __LINE__);
        return;
    }

    // a spurious IRQ occurred; log it
    LogErr(ST_INV_STATE, __LINE__);
}
I'm not sure I understand you. Maybe you should provide more information or an example of what you would like to do.
Maybe this helps:
My experience is that in many I2C implementations the R/W bit is handled together with the 7-bit address, so most of the time there is no separate function to set or clear the R/W bit.
That means the R/W bit ends up as the least significant bit of the 8-bit address byte: odd address bytes are used to read data from slaves, and even address bytes are used to write data to slaves.
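In code form, that convention is just the following (the macro name is only illustrative):
#include <stdint.h>

/* The on-the-wire address byte is the 7-bit address shifted left by one,
 * with the R/W flag in bit 0: 0 = master writes, 1 = master reads. */
#define I2C_ADDR_BYTE(addr7, rw)  ((uint8_t)(((addr7) << 1) | ((rw) & 1u)))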
There seems to be no way to determine whether the transaction initiated by receipt of the address is a read or a write, even though the hardware knows whether the LSbit is set or clear. The intention of the master is only known once the RXNE or TXE interrupt/bit occurs.
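A hypothetical way to express that in the handler (the enum and helper names are mine, not from the SPL or the question's code):
/* Minimal sketch: the slave learns the transfer direction from whichever
 * event arrives first after the ADDR match. */
typedef enum { DIR_UNKNOWN, DIR_MASTER_WRITE, DIR_MASTER_READ } I2CDir;

static I2CDir dir = DIR_UNKNOWN;

static void on_addr_match(void) { dir = DIR_UNKNOWN; }  /* ADDR interrupt */

static void on_rxne(void)       /* first RXNE: the master is writing to us */
{
    if (dir == DIR_UNKNOWN) { dir = DIR_MASTER_WRITE; }
}

static void on_txe(void)        /* first TXE: the master is reading from us */
{
    if (dir == DIR_UNKNOWN) { dir = DIR_MASTER_READ; }
}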
I am currently working on a logger that uses an MSP430F2618 MCU and a SanDisk 4 GB SDHC card.
Card initialization works as expected, and I can also read the MBR and FAT table.
The problem is that I can't write any data to it. I have checked whether it is write-protected by the notch, but it isn't. Windows 7 has no problem reading from or writing to it.
I also used a tool called "HxD" and tried to alter some sectors (under Windows). When I try to save the content to the SD card, the tool pops up a window telling me "Access denied!".
Then I came back to my code for writing to SD card:
uint8_t SdWriteBlock(uchar_t *blockData, const uint32_t address)
{
    uint8_t result = OP_ERROR;
    uint16_t count;
    uchar_t dataResp;
    uint8_t idx;

    for (idx = RWTIMEOUT; idx > 0; idx--)
    {
        CS_LOW();
        SdCommand(CMD24, address, 0xFF);
        dataResp = SdResponse();
        if (dataResp == 0x00)
        {
            break;
        }
        else
        {
            CS_HIGH();
            SdWrite(0xFF);
        }
    }
    if (0x00 == dataResp)
    {
        //send command success, now send data starting with DATA TOKEN = 0xFE
        SdWrite(0xFE);
        //send 512 bytes of data
        for (count = 0; count < 512; count++)
        {
            SdWrite(*blockData++);
        }
        //now send two CRC bytes; though they are not used in SPI mode,
        //they are still needed in the transfer format
        SdWrite(0xFF);
        SdWrite(0xFF);
        //now read in the DATA RESPONSE TOKEN
        do
        {
            SdWrite(0xFF);
            dataResp = SdRead();
        }
        while (dataResp == 0x00);
        //following the DATA RESPONSE TOKEN are a number of BUSY bytes
        //a zero byte indicates the SD/MMC is busy programming,
        //a non-zero byte indicates the SD/MMC is not busy
        dataResp = dataResp & 0x0F;
        if (0x05 == dataResp)
        {
            idx = RWTIMEOUT;
            do
            {
                SdWrite(0xFF);
                dataResp = SdRead();
                if (0x0 == dataResp)
                {
                    result = OP_OK;
                    break;
                }
                idx--;
            }
            while (idx != 0);
            CS_HIGH();
            SdWrite(0xFF);
        }
        else
        {
            CS_HIGH();
            SdWrite(0xFF);
        }
    }
    return result;
}
The problem seems to be where I wait for the card status:
do
{
    SdWrite(0xFF);
    dataResp = SdRead();
}
while (dataResp == 0x00);
Here I am waiting for a response of the form "X5" (hex), where X is undefined.
But in most cases the response is 0x00 and I never get out of the loop. In a few cases the response is 0xFF.
I can't figure out what the problem is.
Can anyone help me? Thanks!
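For what it's worth, that wait could be bounded the same way the CMD24 retry loop is, for example with a helper like this (written against the question's SdWrite/SdRead helpers and RWTIMEOUT; the names and structure are illustrative, not a drop-in fix):
/* Wait a bounded number of cycles for the data response token (format
 * xxx0sss1) instead of spinning forever on 0x00. Returns the token,
 * or 0xFF on timeout so the caller can abort the write. */
static uchar_t SdWaitDataResponse(void)
{
    uint16_t tries;
    uchar_t resp = 0xFF;

    for (tries = RWTIMEOUT; tries > 0; tries--)
    {
        SdWrite(0xFF);
        resp = SdRead();
        if ((resp & 0x11) == 0x01)   /* a valid data response token arrived */
        {
            return resp;
        }
    }
    return 0xFF;                     /* timed out */
}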
Regarding the "4GB SDHC": we need to see much more of your code. Many µC SPI code bases only support SD cards <= 2 GB, so using a smaller card might work.
You can check this yourself: an SDHC card needs a CMD8 and an ACMD41 after the CMD0 (GO_IDLE_STATE) command, otherwise you cannot read or write data on it.
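A rough sketch of that bring-up sequence, written around the question's own helpers (SdCommand/SdResponse/SdWrite); the command macros and the assumption that SdCommand's third argument is the CRC byte are mine, not taken from the question:
#define CMD0    0u   /* GO_IDLE_STATE   */
#define CMD8    8u   /* SEND_IF_COND    */
#define CMD55  55u   /* APP_CMD         */
#define ACMD41 41u   /* SD_SEND_OP_COND */

uint8_t SdInitSdhc(void)
{
    uint16_t retry;

    CS_LOW();

    /* CMD0: a valid CRC (0x95) is still required while the card is in SD mode */
    SdCommand(CMD0, 0, 0x95);
    if (SdResponse() != 0x01)              /* expect "in idle state" */
    {
        CS_HIGH();
        return OP_ERROR;
    }

    /* CMD8: 2.7-3.6 V range + check pattern 0xAA; identifies a v2.00/SDHC card.
       The four trailing R7 bytes should also be read and verified here. */
    SdCommand(CMD8, 0x000001AA, 0x87);
    (void)SdResponse();

    /* Repeat CMD55 + ACMD41 with the HCS bit set until the card leaves idle */
    for (retry = 0; retry < 0xFFFF; retry++)
    {
        SdCommand(CMD55, 0, 0xFF);
        (void)SdResponse();
        SdCommand(ACMD41, 0x40000000, 0xFF);
        if (SdResponse() == 0x00)
        {
            /* CMD58 (READ_OCR) could be sent here to confirm the CCS bit,
               i.e. that block addressing is in use. */
            CS_HIGH();
            SdWrite(0xFF);
            return OP_OK;
        }
    }

    CS_HIGH();
    return OP_ERROR;
}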
Thank you for your answers, but I solved my problem. It was a problem of timing. I had to put a delay at specific points.
I create an AudioQueue in the following steps.
Create a new output with AudioQueueNewOutput
Add a property listener for the kAudioQueueProperty_IsRunning property
Allocate my buffers with AudioQueueAllocateBuffer
Call AudioQueuePrime
Call AudioQueueStart
The problem is, when I call AudioQueuePrime it outputs the following error on the console:
AudioConverterNew returned -50
Prime failed (-50); will stop (11025/0 frames)
What's wrong here?
PS:
I got this error on iOS (Device & Simulator)
The output callback installed when calling AudioQueueNewOutput is never called!
The file is valid and the AudioStreamBasicDescription does match the format (AAC)
I tested the file with Mat's AudioStreamer and it seems to work there
Sample Init Code:
// Get the stream description from the first sample buffer
OSStatus err = noErr;
EDSampleBuffer *firstBuf = [sampleBufs objectAtIndex:0];
AudioStreamBasicDescription asbd = firstBuf.streamDescription;

// TODO: remove temporary format setup, just to ensure format for now
asbd.mSampleRate = 44100.00;
asbd.mFramesPerPacket = 1024;   // AAC default
asbd.mChannelsPerFrame = 2;
pfcc(asbd.mFormatID);
// -----------------------------------

// Create a new output
err = AudioQueueNewOutput(&asbd, _audioQueueOutputCallback, self, NULL, NULL, 0, &audioQueue);
if (err) {
    [self _reportError:kASAudioQueueInitializationError];
    goto bail;
}

// Add a property listener for the queue state
err = AudioQueueAddPropertyListener(audioQueue, kAudioQueueProperty_IsRunning, _audioQueueIsRunningCallback, self);
if (err) {
    [self _reportError:kASAudioQueuePropertyListenerError];
    goto bail;
}

// Allocate the queue buffers
for (int i = 0; i < kAQNumBufs; i++) {
    err = AudioQueueAllocateBuffer(audioQueue, kAQDefaultBufSize, &queueBuffer[i]);
    if (err) {
        [self _reportError:kASAudioQueueBufferAllocationError];
        goto bail;
    }
}

// Prime and start
err = AudioQueuePrime(audioQueue, 0, NULL);
if (err) {
    printf("failed to prime audio queue %ld\n", err);
    goto bail;
}
err = AudioQueueStart(audioQueue, NULL);
if (err) {
    printf("failed to start audio queue %ld\n", err);
    goto bail;
}
These are the format flags from the audio file stream
rate: 44100.000000
framesPerPacket: 1024
format: aac
bitsPerChannel: 0
reserved: 0
channelsPerFrame: 2
bytesPerFrame: 0
bytesPerPacket: 0
formatFlags: 0
cookieSize 39
AudioConverterNew returned -50
Prime failed (-50); will stop (11025/0 frames)
What's wrong here?
You did it wrong.
No, really. That's what that error means, and that's ALL that error means.
That's why paramErr (-50) is such an annoying error code: It doesn't say a damn thing about what you (or anyone else) did wrong.
The first step to formulating guesses as to what it's complaining about is to find out what function returned that error. Change your _reportError: method to enable you to log the name of the function that returned the error. Then, log the parameters you're passing to that function and figure out why it's of the opinion that those parameters to that function don't make sense.
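For example, a hypothetical helper along these lines (CheckOSStatus is not part of AudioToolbox; it only illustrates the kind of logging meant here):
#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>

// Wrap each call so the failing expression and its OSStatus are printed.
#define CheckOSStatus(call)                                             \
    do {                                                                \
        OSStatus __err = (call);                                        \
        if (__err != noErr) {                                           \
            fprintf(stderr, "%s failed with OSStatus %d\n",             \
                    #call, (int)__err);                                 \
        }                                                               \
    } while (0)

// Usage, e.g.: CheckOSStatus(AudioQueuePrime(audioQueue, 0, NULL));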
My own wild guess is that it's because the values you forced into the ASBD don't match the characteristics of the sample buffer. The log output you included in your question says “11025/0 frames”; 11025 is a common sample rate (but different from 44100). I assume you'd know what the 0 refers to.