How do I send a payload to an XBee - embedded

I have implemented the following code on an embedded platform that attempts to communicate with an XBee. The embedded platform that executes the code below is not itself an XBee:
int main()
{
    char payload[12] = {0x61,0x88,0x00,0x64,0x00,0x00,0x00,0x00,0x00,0xEC,0x00,0x00};
    payload[2] = 0x10;
    payload[9] = 0x01;
    char data = 'H'; // Send simple ASCII character to XBee
    payload[11] = data;
    while (1)
        sendByteOfData(payload, 12);
}

void sendByteOfData(char *payload, int len)
{
    int x;
    for (x = 0; x < 4; x++)
        // This function sends IEEE 802.15.4 frames, and I know it
        // works because they are detected in the sniffer.
        send_IEEE_802_15_4_frame(payload, len);

    payload[2] = payload[2] % 256 + 1;
    payload[9] = payload[9] % 256 + 1;
    if (payload[9] % 256 == 0)
        payload[9] = 0x01;
    else
        payload[9] %= 256;
}
To my surprise, the above code actually sent one byte from the embedded platform to the XBee successfully. However, the infinite loop at the end of main() should have produced a stream of bytes.
My suspicion is that I need to set payload[2] and payload[9] correctly, and that there is probably a flaw in the incremental modulo-256 algorithm shown above.
How do I get a continuous stream of bytes?

A few thoughts...
Make your payload an array of unsigned char or, even better, uint8_t.
To update payload[2] and payload[9], simplify your code:
++payload[2];
++payload[9];
if (payload[9] == 0) payload[9] = 1;
Add a delay between sends. You might even need to wait for a response before sending the next character.
Since it's a payload of unsigned 8-bit values, they'll automatically roll from 255 to 0. I assume your special case code for payload[9] is trying to roll from 255 to 1 (instead of 0).
Make sure your payload doesn't need to include a checksum of some sort. Updating those two bytes would have an effect on a checksum byte.
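Putting those thoughts together, here is a minimal sketch of how the send helper could look (uint8_t payload, the simplified rollover, and a spot for a delay). send_IEEE_802_15_4_frame() is the routine from the question, and delay_ms() is a hypothetical stand-in for whatever delay your platform provides:

#include <stdint.h>

// Provided by your platform/radio driver, as in the question.
void send_IEEE_802_15_4_frame(uint8_t *frame, int len);
// Hypothetical platform delay routine.
void delay_ms(int ms);

void sendFrame(uint8_t *payload, int len)
{
    send_IEEE_802_15_4_frame(payload, len);

    ++payload[2];            // unsigned 8-bit: rolls from 255 to 0 on its own
    ++payload[9];
    if (payload[9] == 0)     // roll from 255 to 1 instead of 0
        payload[9] = 1;

    delay_ms(10);            // give the XBee some time between frames
}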

Related

STM32 USB Tx Busy

I have an application running on an STM32F429ZIT6 that uses the USB stack to communicate with a PC client.
The MCU receives one type of message of 686 bytes every second, followed by another message of 14 bytes with a 0.5 second delay between messages. The 14-byte message is a heartbeat, so the MCU needs to reply to it.
After 5 to 10 minutes of continuous operation, the MCU is no longer able to send data because hcdc->TxState is always busy. Reception keeps working fine.
During the Rx interrupt, the application only adds data to a ring buffer; this buffer is later drained and processed by the main function.
static int8_t CDC_Receive_HS(uint8_t* Buf, uint32_t *Len)
{
    /* USER CODE BEGIN 11 */
    /* Message RX completed; push it to the ring buffer to be processed at FMC_Run() */
    for (uint16_t i = 0; i < *Len; i++) {
        ring_push(RMP_RXRingBuffer, (uint8_t *) &Buf[i]);
    }
    USBD_CDC_SetRxBuffer(&hUsbDeviceHS, &Buf[0]);
    USBD_CDC_ReceivePacket(&hUsbDeviceHS);
    return (USBD_OK);
    /* USER CODE END 11 */
}
USB TX is also kept as simple as possible:
uint8_t CDC_Transmit_HS(uint8_t* Buf, uint16_t Len)
{
    uint8_t result = USBD_OK;
    /* USER CODE BEGIN 12 */
    USBD_CDC_HandleTypeDef *hcdc = (USBD_CDC_HandleTypeDef*)hUsbDeviceHS.pClassData;
    if (hcdc->TxState != 0)
    {
        ZF_LOGE("Tx failed, resource busy\n\r");
        return USBD_BUSY;
    }
    USBD_CDC_SetTxBuffer(&hUsbDeviceHS, Buf, Len);
    result = USBD_CDC_TransmitPacket(&hUsbDeviceHS);
    ZF_LOGD("TX Message Result:%d\n\r", result);
    /* USER CODE END 12 */
    return result;
}
I'm using the latest HAL drivers and software from CubeIDE (1.27.1).
I have tried expanding the minimum heap size from 0x200 to larger values, but the result is the same.
Also, the line coding is set according to the recommended values:
case CDC_SET_LINE_CODING:
    LineCoding.bitrate = (uint32_t) (pbuf[0] | (pbuf[1] << 8) | (pbuf[2] << 16) | (pbuf[3] << 24));
    LineCoding.format = pbuf[4];
    LineCoding.paritytype = pbuf[5];
    LineCoding.datatype = pbuf[6];
    ZF_LOGD("Line Coding Set\n\r");
    break;

case CDC_GET_LINE_CODING:
    pbuf[0] = (uint8_t) (LineCoding.bitrate);
    pbuf[1] = (uint8_t) (LineCoding.bitrate >> 8);
    pbuf[2] = (uint8_t) (LineCoding.bitrate >> 16);
    pbuf[3] = (uint8_t) (LineCoding.bitrate >> 24);
    pbuf[4] = LineCoding.format;
    pbuf[5] = LineCoding.paritytype;
    pbuf[6] = LineCoding.datatype;
    ZF_LOGD("Line Coding Get\n\r");
    break;
Thanks in advance, any support is appreciated.
I don't know enough about the STM32 libraries to really check your code, but I suspect you are forgetting to read the bytes transmitted by the STM32 on the PC side. Try opening a terminal program like PuTTY and connecting to the STM32's virtual serial port. Otherwise, the Windows USB-to-serial driver (usbser.sys) will eventually have its buffers filled with data from your device and will stop requesting more, at which point the buffers on your device will fill up as well.
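If you'd rather test from code than from PuTTY, a minimal POSIX sketch that just keeps draining the virtual serial port looks like this (the device path /dev/ttyACM0 is an assumption; on Windows you would open the corresponding COM port instead):

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    // Assumed device node for the CDC ACM port; adjust for your system.
    int fd = open("/dev/ttyACM0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);               // raw mode: no line editing or translation
    tcsetattr(fd, TCSANOW, &tio);

    unsigned char buf[512];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);   // keep the host reading so the
        if (n <= 0) break;                       // driver/device buffers never fill up
        fwrite(buf, 1, (size_t)n, stdout);
    }
    close(fd);
    return 0;
}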

Ambiguous process calcChecksum

CONTEXT
I'm using code written to work with a GPS module that connects to the Arduino through serial communication. The module starts each packet with a header (0xb5, 0x62), continues with the information you requested, and ends with two checksum bytes, CK_A and CK_B. I don't understand the code that calculates that checksum. More info about the checksum algorithm (8-bit Fletcher algorithm) is in the module protocol spec (https://www.u-blox.com/sites/default/files/products/documents/u-blox7-V14_ReceiverDescriptionProtocolSpec_%28GPS.G7-SW-12001%29_Public.pdf), page 74 (87 counting the index).
MORE INFO
I just want to understand the code; it works fine. In the UBX protocol document I mentioned, there is also a piece of code that explains how it works (it isn't written in C++).
struct NAV_POSLLH {
    // Here goes the struct
};

NAV_POSLLH posllh;

void calcChecksum(unsigned char* CK) {
    memset(CK, 0, 2);
    for (int i = 0; i < (int)sizeof(NAV_POSLLH); i++) {
        CK[0] += ((unsigned char*)(&posllh))[i];
        CK[1] += CK[0];
    }
}
In the link you provided, you can find a link to RFC 1145, which contains that 8-bit Fletcher algorithm as well and explains:
It can be shown that at the end of the loop A will contain the 8-bit
1's complement sum of all octets in the datagram, and that B will
contain (n)*D[0] + (n-1)*D[1] + ... + D[n-1].
n = sizeof byte D[];
Quote adjusted to C syntax
Try it with a couple of bytes, pen and paper, and you'll see :)
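If pen and paper feels tedious, here is a tiny standalone sketch of the same two running sums over a couple of example bytes (the input values are made up purely for illustration):

#include <stdio.h>

int main(void)
{
    // Example payload bytes, chosen arbitrarily for illustration.
    unsigned char data[] = { 0x01, 0x02, 0x03, 0x04 };
    unsigned char ck_a = 0, ck_b = 0;

    for (int i = 0; i < (int)sizeof data; i++) {
        ck_a += data[i];   // A: running sum of all bytes (mod 256)
        ck_b += ck_a;      // B: running sum of the A values (mod 256)
    }

    // For 0x01 0x02 0x03 0x04: A = 0x0A, B = 0x14
    printf("CK_A = 0x%02X, CK_B = 0x%02X\n", ck_a, ck_b);
    return 0;
}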

How to write the avc1 atom with libavcodec for MP4 file using H264 codec

I am trying to create an MP4 file using libavcodec. I am using a Raspberry Pi, which has a built-in hardware H264 encoder. It outputs Annex B H264 frames, and I am trying to work out the proper way to save these frames into an MP4 container.
My first attempt simply wrote the MP4 header without building the extradata. The Raspberry Pi transmits the SPS and PPS info as the first frame. This is followed by the IDR and then the remaining H264 frames. I started with avformat_write_header, then repackaged the succeeding frames in an AVPacket and used
av_write_frame(outputFormatCtx, &pkt);
This works fine, but mplayer tries to decode the first frame (the one containing the SPS and PPS info) and fails to decode it. However, succeeding frames are decodable and the video plays fine from that point on.
I wanted to construct a proper MP4 file, so I wanted the SPS and PPS information to go into the MP4 header. I read that it should be in the avc1 atom and that I needed to build the extradata and somehow attach it to the outputFormatCtx.
This is my effort so far, after parsing the SPS and PPS from the returned encoder buffers. (I removed the leading 0x00000001 NAL delimiters prior to memcpying into sps and pps.)
if ((sps) && (pps)) {
    // Length of extradata is 6 bytes + 2 bytes for spslen + sps + 1 byte number of pps + 2 bytes for ppslen + pps
    uint32_t extradata_len = 8 + spslen + 1 + 2 + ppslen;
    outputStream->codecpar->extradata = (uint8_t*)av_mallocz(extradata_len);
    outputStream->codecpar->extradata_size = extradata_len;

    // Start writing avcC extradata
    outputStream->codecpar->extradata[0] = 0x01;       // version
    outputStream->codecpar->extradata[1] = sps[1];     // profile
    outputStream->codecpar->extradata[2] = sps[2];     // compatibility
    outputStream->codecpar->extradata[3] = sps[3];     // level
    outputStream->codecpar->extradata[4] = 0xFC | 3;   // reserved (6 bits), NALU length size - 1 (2 bits) which is 3
    outputStream->codecpar->extradata[5] = 0xE0 | 1;   // reserved (3 bits), num of SPS (5 bits) which is 1 sps

    // Write sps length
    memcpy(&outputStream->codecpar->extradata[6], &spslen, 2);

    // Check to see if written correctly
    uint16_t *cspslen = (uint16_t *)&outputStream->codecpar->extradata[6];
    fprintf(stderr, "SPS length Wrote %d and read %d \n", spslen, *cspslen);

    // Write the actual sps
    int i = 0;
    for (i = 0; i < spslen; i++) {
        outputStream->codecpar->extradata[8 + i] = sps[i];
    }
    for (size_t i = 0; i != outputStream->codecpar->extradata_size; ++i)
        fprintf(stderr, "\\%02x", (unsigned char)outputStream->codecpar->extradata[i]);
    fprintf(stderr, "\n");

    // Number of pps
    outputStream->codecpar->extradata[8 + spslen] = 0x01;

    // Size of pps
    memcpy(&outputStream->codecpar->extradata[8 + spslen + 1], &ppslen, 2);
    for (size_t i = 0; i != outputStream->codecpar->extradata_size; ++i)
        fprintf(stderr, "\\%02x", (unsigned char)outputStream->codecpar->extradata[i]);
    fprintf(stderr, "\n");

    // Check to see if written correctly
    uint16_t *cppslen = (uint16_t *)&outputStream->codecpar->extradata[8 + spslen + 1];
    fprintf(stderr, "PPS length Wrote %d and read %d \n", ppslen, *cppslen);

    // Write actual PPS
    for (i = 0; i < ppslen; i++) {
        outputStream->codecpar->extradata[8 + spslen + 1 + 2 + i] = pps[i];
    }

    // Output the extradata to check
    for (size_t i = 0; i != outputStream->codecpar->extradata_size; ++i)
        fprintf(stderr, "\\%02x", (unsigned char)outputStream->codecpar->extradata[i]);
    fprintf(stderr, "\n");

    // Access the outputFormatCtx internal AVCodecContext and copy the codecpar to it
    AVCodecContext *avctx = outputFormatCtx->streams[0]->codec;
    fprintf(stderr, "Extradata size output stream sps pps %d\n", outputStream->codecpar->extradata_size);
    if (avcodec_parameters_to_context(avctx, outputStream->codecpar) < 0) {
        fprintf(stderr, "Error avcodec_parameters_to_context");
    }

    // Check to see if extradata was actually transferred to the outputFormatCtx internal AVCodecContext
    fprintf(stderr, "Extradata size after sps pps %d\n", avctx->extradata_size);

    // Write the MP4 header
    if (avformat_write_header(outputFormatCtx, NULL) < 0) {
        fprintf(stderr, "Error avformat_write_header");
        ret = 1;
    } else {
        extradata_written = true;
        fprintf(stderr, "EXTRADATA written\n");
    }
}
The resulting video file does not play. The extradata is actually stored in the tail section of the MP4 file instead of in the avc1 location in the MP4 header, so it is being written by libavcodec, but most likely by avformat_write_trailer.
I will post the SPS and PPS info here, along with the final extradata byte string, just in case the error was in forming the extradata.
Here is the buffer from the hardware encoder, with the SPS and PPS each preceded by the NAL delimiter:
\00\00\00\01\27\64\00\28\ac\2b\40\a0\cd\00\f1\22\6a\00\00\00\01\28\ee\04\f2\c0
Here is the 13 byte sps:
27640028ac2b40a0cd00f1226a
Here is the 5 byte pps:
28ee04f2c0
Here is the final extradata byte string which is 29 bytes long. I hope I wrote the PPS and SPS size correctly.
\01\64\00\28\ff\e1\0d\00\27\64\00\28\ac\2b\40\a0\cd\00\f1\22\6a\01\05\00\28\ee\04\f2\c0
I did the same conversion from the 0x00000001 NAL delimiter to a 4-byte NAL size for the succeeding frames from the encoder, saved them to the file sequentially, and then wrote the trailer.
Any idea where the mistake is? How can I write the extradata to its proper location in the MP4 header?
Thanks,
Chris
Well, I found the problem. The Raspberry Pi is little endian, so I assumed that I should write the SPS length, the PPS length, and each NALU size in little endian. They need to be written in big endian. After I made the change, the avcC atom shows up in mp4info and mplayer can now play back the video.
It's also not necessary to access the outputFormatCtx's internal AVCodecContext and modify it.
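For anyone hitting the same thing, here is a small sketch of the big-endian writes (the helper names write_be16 and write_be32 are just for illustration, not part of libavcodec):

#include <stdint.h>

/* Hypothetical helper: write a 16-bit value in big-endian order,
 * e.g. for the SPS/PPS length fields inside the avcC extradata. */
static void write_be16(uint8_t *dst, uint16_t value)
{
    dst[0] = (uint8_t)(value >> 8);
    dst[1] = (uint8_t)(value & 0xFF);
}

/* Hypothetical helper: write a 32-bit value in big-endian order,
 * e.g. for the 4-byte NALU length prefix that replaces the Annex B start code. */
static void write_be32(uint8_t *dst, uint32_t value)
{
    dst[0] = (uint8_t)(value >> 24);
    dst[1] = (uint8_t)(value >> 16);
    dst[2] = (uint8_t)(value >> 8);
    dst[3] = (uint8_t)(value);
}

So instead of memcpy(&extradata[6], &spslen, 2), something like write_be16(&extradata[6], spslen) produces the byte order the avcC box expects, and the same idea applies to the per-frame NALU sizes.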
This post was very helpful:
Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
Thanks,
Chris

How to make sense of CoreBluetooth data

I've been playing around with CoreBluetooth a lot lately, and although I can connect to some devices, I never seem to be able to properly read the data (characteristic values).
Right now I am connecting to the Wahoo BT heart rate monitor, and I am getting all the signals, but I can't turn the data into anything sensible. (Yes, I am aware there is an API, but I am trying to connect without it, to properly get something working with CoreBluetooth.)
So far I have not been able to turn the NSData (characteristic.value) into anything sensible. If you have any suggestions on how to make sense of this data, that would be very much appreciated.
Below is some code to fully parse the heart rate measurement characteristic data.
How to process the data depends on several things:
is the BPM written into a single byte or two?
is there EE (Energy Expended) data present?
you need to calculate the number of RR-interval values, as there can be multiple values within one message (I have seen up to three).
Here is the actual spec of the Heart_rate_measurement characteristic
// Instance method to get the heart rate BPM information
- (void)getHeartBPMData:(CBCharacteristic *)characteristic error:(NSError *)error
{
    // Get the BPM
    // https://developer.bluetooth.org/gatt/characteristics/Pages/CharacteristicViewer.aspx?u=org.bluetooth.characteristic.heart_rate_measurement.xml

    // Convert the contents of the characteristic value to a data-object
    NSData *data = [characteristic value];

    // Get the byte sequence of the data-object
    const uint8_t *reportData = [data bytes];

    // Initialise the offset variable
    NSUInteger offset = 1;
    // Initialise the bpm variable
    uint16_t bpm = 0;

    // Next, obtain the first byte at index 0 in the array as defined by reportData[0] and mask out all but the 1st bit
    // The result returned will either be 0, which means that the 2nd bit is not set, or 1 if it is set
    // If the 2nd bit is not set, retrieve the BPM value at the second byte location at index 1 in the array
    if ((reportData[0] & 0x01) == 0) {
        // Retrieve the BPM value for the Heart Rate Monitor
        bpm = reportData[1];
        offset = offset + 1; // Plus 1 byte
    }
    else {
        // If the second bit is set, retrieve the BPM value at second byte location at index 1 in the array and
        // convert this to a 16-bit value based on the host's native byte order
        bpm = CFSwapInt16LittleToHost(*(uint16_t *)(&reportData[1]));
        offset = offset + 2; // Plus 2 bytes
    }
    NSLog(@"bpm: %i", bpm);

    // Determine if EE data is present
    // If the 3rd bit of the first byte is 1 this means there is EE data
    // If so, increase offset with 2 bytes
    if ((reportData[0] & 0x03) == 1) {
        offset = offset + 2; // Plus 2 bytes
    }

    // Determine if RR-interval data is present
    // If the 4th bit of the first byte is 1 this means there is RR data
    if ((reportData[0] & 0x04) == 0)
    {
        NSLog(@"%@", @"Data are not present");
    }
    else
    {
        // The number of RR-interval values is total bytes left / 2 (size of uint16)
        NSUInteger length = [data length];
        NSUInteger count = (length - offset) / 2;
        NSLog(@"RR count: %lu", (unsigned long)count);
        for (int i = 0; i < count; i++) {
            // The unit for RR interval is 1/1024 seconds
            uint16_t value = CFSwapInt16LittleToHost(*(uint16_t *)(&reportData[offset]));
            value = ((double)value / 1024.0) * 1000.0;
            offset = offset + 2; // Plus 2 bytes
            NSLog(@"RR value %lu: %u", (unsigned long)i, value);
        }
    }
}
Well, you should be implementing the Heart Rate Profile (see here), which uses the Heart Rate Service. If you look at the Heart Rate Service Specifications, you will see that the format of the Heart Rate Measurement Characteristic changes according to the flags set in the least significant octet of the data packet.
This means that you need to set up your code to handle dynamic packet sizes.
So your general process would be:
Get the first byte of the value property and check it for:
Is the heart rate measurement 8 bits or 16 bits?
Is sensor contact supported?
Is sensor contact detected?
Is Energy Expended supported?
Is RR-Interval measurement supported?
If the heart rate measurement is 8 bits (bit 0 of byte 0 is 0), then cast the next byte into its intended format (hint: it's uint8_t). If it is 16 bits (i.e. bit 0 of byte 0 is 1), then cast the next two bytes into uint16_t.
If Energy Expended is supported (bit 3 of byte 0 is 1), then read the next two bytes as a uint16_t.
Do the same with RR-Intervals.
Using NSData - especially with Core Bluetooth - takes some getting used to, but it's not that bad once you grasp the concept.
Good luck!
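For concreteness, here is a minimal C-style sketch of that flag-driven parsing, assuming the raw characteristic bytes are already in a buffer (the flag bit positions follow the Heart Rate Measurement spec described above):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

// Parse a Heart Rate Measurement packet; buf/len come from the characteristic value.
static void parse_hrm(const uint8_t *buf, size_t len)
{
    uint8_t flags = buf[0];
    size_t offset = 1;

    // Bit 0: 0 = heart rate is uint8, 1 = heart rate is uint16 (little endian)
    uint16_t bpm;
    if ((flags & 0x01) == 0) {
        bpm = buf[offset];
        offset += 1;
    } else {
        bpm = (uint16_t)(buf[offset] | (buf[offset + 1] << 8));
        offset += 2;
    }
    printf("bpm: %u\n", bpm);

    // Bit 3: Energy Expended present (uint16, little endian)
    if (flags & 0x08) {
        uint16_t energy = (uint16_t)(buf[offset] | (buf[offset + 1] << 8));
        offset += 2;
        printf("energy expended: %u\n", energy);
    }

    // Bit 4: RR-intervals present; the rest of the packet is uint16 values in 1/1024 s
    if (flags & 0x10) {
        while (offset + 1 < len) {
            uint16_t rr = (uint16_t)(buf[offset] | (buf[offset + 1] << 8));
            offset += 2;
            printf("RR interval: %u ms\n", (unsigned)(rr * 1000u / 1024u));
        }
    }
}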
Well...
What you'll have to do when you read the value for the characteristic is:
NSData *data = [characteristic value];
theTypeOfTheData value;
[data getBytes:&value length:sizeof(value)];
But theTypeOfTheData can be whatever the developer of the device decided on. So it could be UInt8, UInt16, or a struct (with or without bitfields)... You'll have to get that info by contacting the developer of the device or by checking whether there is some documentation.
For example, my colleague and I use the type of data that consumes the least space (because the device doesn't have infinite memory) for what is needed.
Look into the descriptor of the characteristic. It may point out the type of the data.

Functions to compress and uncompress array of integers

I was recently asked to complete a task for a C++ role; however, as the application was not progressed any further, I thought I would post here for some feedback / advice / improvements / a reminder of concepts I've forgotten.
The task was:
The following data is a time series of integer values
int timeseries[32] = {67497, 67376, 67173, 67235, 67057, 67031, 66951,
66974, 67042, 67025, 66897, 67077, 67082, 67033, 67019, 67149, 67044,
67012, 67220, 67239, 66893, 66984, 66866, 66693, 66770, 66722, 66620,
66579, 66596, 66713, 66852, 66715};
The series might be, for example, the closing price of a stock each day
over a 32 day period.
As stored above, the data will occupy 32 x sizeof(int) bytes = 128 bytes
assuming 4 byte ints.
Using delta encoding, write a function to compress, and a function to
uncompress, data like the above.
OK, so before this point I had never looked at compression, so my solution is far from perfect. The way I approached the problem is by compressing the array of integers into an array of bytes. When representing each integer as bytes, I calculate the most significant byte (MSB) and keep everything up to that point, throwing the rest away. This is then added to the byte array. For negative values I increment the MSB count by 1 so that we can differentiate between positive and negative values when decoding, by keeping the leading 1-bit values.
When decoding I parse this jagged byte array and simply reverse the actions performed when compressing. As mentioned, I had never looked at compression prior to this task, so I came up with my own method to compress the data. I had been looking at C++/CLI recently and had not really used it before, so I just decided to write it in that language, for no particular reason. Below is the class, and a unit test at the very bottom. Any advice / improvements / enhancements will be much appreciated.
Thanks.
array<array<Byte>^>^ CDeltaEncoding::CompressArray(array<int>^ data)
{
    int temp = 0;
    int original;
    int size = 0;
    array<int>^ tempData = gcnew array<int>(data->Length);
    data->CopyTo(tempData, 0);
    array<array<Byte>^>^ byteArray = gcnew array<array<Byte>^>(tempData->Length);

    for (int i = 0; i < tempData->Length; ++i)
    {
        original = tempData[i];
        tempData[i] -= temp;
        temp = original;

        int msb = GetMostSignificantByte(tempData[i]);
        byteArray[i] = gcnew array<Byte>(msb);
        System::Buffer::BlockCopy(BitConverter::GetBytes(tempData[i]), 0, byteArray[i], 0, msb);
        size += byteArray[i]->Length;
    }
    return byteArray;
}

array<int>^ CDeltaEncoding::DecompressArray(array<array<Byte>^>^ buffer)
{
    System::Collections::Generic::List<int>^ decodedArray = gcnew System::Collections::Generic::List<int>();
    int temp = 0;

    for (int i = 0; i < buffer->Length; ++i)
    {
        int retrievedVal = GetValueAsInteger(buffer[i]);
        decodedArray->Add(retrievedVal);
        decodedArray[i] += temp;
        temp = decodedArray[i];
    }
    return decodedArray->ToArray();
}

int CDeltaEncoding::GetMostSignificantByte(int value)
{
    array<Byte>^ tempBuf = BitConverter::GetBytes(Math::Abs(value));
    int msb = tempBuf->Length;

    for (int i = tempBuf->Length - 1; i >= 0; --i)
    {
        if (tempBuf[i] != 0)
        {
            msb = i + 1;
            break;
        }
    }

    if (!IsPositiveInteger(value))
    {
        // We need an extra byte to differentiate the negative integers
        msb++;
    }
    return msb;
}

bool CDeltaEncoding::IsPositiveInteger(int value)
{
    return value / Math::Abs(value) == 1;
}

int CDeltaEncoding::GetValueAsInteger(array<Byte>^ buffer)
{
    array<Byte>^ tempBuf;

    if (buffer->Length % 2 == 0)
    {
        // With even integers there is no need to allocate a new byte array
        tempBuf = buffer;
    }
    else
    {
        tempBuf = gcnew array<Byte>(4);
        System::Buffer::BlockCopy(buffer, 0, tempBuf, 0, buffer->Length);

        unsigned int val = buffer[buffer->Length - 1] &= 0xFF;
        if (val == 0xFF)
        {
            // We have a negative integer compressed into 3 bytes
            // Copy over this last byte as well so we keep the negative pattern
            System::Buffer::BlockCopy(buffer, buffer->Length - 1, tempBuf, buffer->Length, 1);
        }
    }

    switch (tempBuf->Length)
    {
    case sizeof(short):
        return BitConverter::ToInt16(tempBuf, 0);
    case sizeof(int):
    default:
        return BitConverter::ToInt32(tempBuf, 0);
    }
}
And then in a test class I had:
void CTestDeltaEncoding::TestCompression()
{
    array<array<Byte>^>^ byteArray = CDeltaEncoding::CompressArray(m_testdata);
    array<int>^ decompressedArray = CDeltaEncoding::DecompressArray(byteArray);
    int totalBytes = 0;

    for (int i = 0; i < byteArray->Length; i++)
    {
        totalBytes += byteArray[i]->Length;
    }

    Assert::IsTrue(m_testdata->Length * sizeof(m_testdata) > totalBytes, "Expected the total bytes to be less than the original array!!");
    // Expected totalBytes = 53
}
This smells a lot like homework to me. The crucial phrase is: "Using delta encoding."
Delta encoding means you encode the delta (difference) between each number and the next:
67497, 67376, 67173, 67235, 67057, 67031, 66951, 66974, 67042, 67025, 66897, 67077, 67082, 67033, 67019, 67149, 67044, 67012, 67220, 67239, 66893, 66984, 66866, 66693, 66770, 66722, 66620, 66579, 66596, 66713, 66852, 66715
would turn into:
[Base: 67497]: -121, -203, +62
and so on. Assuming 8-bit bytes, the original numbers require 3 bytes apiece (and given the scarcity of compilers with 3-byte integer types, you're normally going to end up with 4 bytes apiece). From the looks of things, the differences will fit quite easily in 2 bytes apiece, and if you can ignore one (or possibly two) of the least significant bits, you can fit them in one byte apiece.
Delta encoding is most often used for things like sound encoding where you can "fudge" the accuracy at times without major problems. For example, if you have a change from one sample to the next that's larger than you've left space to encode, you can encode a maximum change in the current difference, and add the difference to the next delta (and if you don't mind some back-tracking, you can distribute some to the previous delta as well). This will act as a low-pass filter, limiting the gradient between samples.
For example, in the series you gave, a simple delta encoding requires ten bits to represent all the differences. By dropping the LSB, however, nearly all the samples (all but one, in fact) can be encoded in 8 bits. That one has a difference (right shifted one bit) of -173, so if we represent it as -128, we have 45 left. We can distribute that error evenly between the preceding and following sample. In that case, the output won't be an exact match for the input, but if we're talking about something like sound, the difference probably won't be particularly obvious.
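As a rough illustration of the lossless 2-bytes-per-delta variant (not the lossy 1-byte scheme described above), a sketch in plain C might look like this:

#include <stdint.h>
#include <stdio.h>

// Compress: store the first value in full, then one int16_t delta per remaining element.
// Assumes every difference fits in a signed 16-bit value, as it does for this series.
static void delta_encode(const int *in, int n, int32_t *base, int16_t *deltas)
{
    *base = in[0];
    for (int i = 1; i < n; i++)
        deltas[i - 1] = (int16_t)(in[i] - in[i - 1]);
}

static void delta_decode(int32_t base, const int16_t *deltas, int n, int *out)
{
    out[0] = base;
    for (int i = 1; i < n; i++)
        out[i] = out[i - 1] + deltas[i - 1];
}

int main(void)
{
    int timeseries[32] = {67497, 67376, 67173, 67235, 67057, 67031, 66951,
        66974, 67042, 67025, 66897, 67077, 67082, 67033, 67019, 67149, 67044,
        67012, 67220, 67239, 66893, 66984, 66866, 66693, 66770, 66722, 66620,
        66579, 66596, 66713, 66852, 66715};

    int32_t base;
    int16_t deltas[31];
    int restored[32];

    delta_encode(timeseries, 32, &base, deltas);
    delta_decode(base, deltas, 32, restored);

    // 4 bytes for the base + 31 * 2 bytes of deltas = 66 bytes vs. the original 128.
    printf("compressed size: %zu bytes\n", sizeof base + sizeof deltas);
    printf("round trip ok: %d\n", restored[31] == timeseries[31]);
    return 0;
}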
I did mention that it was an exercise I had to complete, and the solution I submitted was deemed not good enough, so I wanted some constructive feedback, seeing as companies never tell you what you did wrong.
When the array is compressed I store the differences and not the original values (except the first), as this was my understanding of delta encoding. If you look at my code, I have provided a full solution; my question was simply how bad it was.