How should I frame the data and send multiple bytes over UART? - embedded

I'm trying to write the code for a switchboard touch sensor which communicates with an MCU (ESP32) using the UART protocol. There is a packet frame which we have to write to the UART to get the reading. Let me share some documentation:
1.
In the API frame structure, the following is the fixed definition of any command frame:
The first byte of every frame is fixed: 0x7B ("{" in ASCII, 123 in decimal).
The second byte of every frame is the 'command type' byte; it tells you what to do with the rest of the data. It also acts as a frame identifier in the data received from the touch panel (response frames and event frames).
The third byte is the length of the frame. It is the 1-byte value (L - 4), where L is the total number of bytes in the whole frame.
The second-to-last byte is the checksum. The checksum is the lower byte of the sum of all individual bytes of the frame except the first byte (start byte), the second byte (command type byte), the second-to-last byte (the checksum byte itself) and the last byte (end code byte).
The last byte is 0x7D; it is the end code indicating the end of the frame ("}" in ASCII, 125 in decimal).
For example, consider the following frame.
Table 1.1 Frame example 1.
1st Byte     2nd Byte   3rd Byte   4th Byte   5th Byte   6th Byte
0x7B         0x03       0x02       0x05       0x07       0x7D
Start Code   Command    Length     Data       Checksum   End Code
So the checksum is the lower byte of the sum of the third and fourth bytes:
0x02 + 0x05 = 0x07, so we take 0x07 as the checksum here.
Example 2: consider the following frame.
Table 1.2 Frame example 2.
1st Byte     2nd Byte           3rd Byte   4th Byte   5th Byte   6th Byte   7th Byte   8th Byte
0x7B         0x52               0x04       0x02       0xFF       0x14       0x19       0x7D
Start Code   Frame Identifier   Length     Data       Data       Data       Checksum   End Code
In this example the checksum is the lower byte of the sum of the third to sixth bytes:
0x04 + 0x02 + 0xFF + 0x14 = 0x0119, so we take 0x19 as the checksum here.
2.
Blink LED (All slider LED) control.
1. Command
This package is used to control blinking of the LED. Hardware version 2.0 has a dedicated status LED, which is used to indicate the status of the product as needed.
Table 1.6 Blink LED command package detail.
Status   1st Byte     2nd Byte   3rd Byte   4th Byte       5th Byte               6th Byte   7th Byte
Start    0x7B         0x05       0x03       0x01 (Start)   0x01 to 0xNN*          Checksum   0x7D
Stop     0x7B         0x05       0x03       0x00 (Stop)    0x00                   Checksum   0x7D
         Start Code   Command    Length     Start/Stop     Pulse width (x100ms)   Checksum   End Code
To start status LED blinking, the start command frame is sent with the 4th byte set to 0x01. For example, to make the status LED blink with a time duration of 200 ms, the value of the 5th byte is 0x02.
To stop status LED blinking, the stop frame is sent.
2. Response
Table 1.7 Blink LED response detail.
1st Byte 2nd Byte 3rd Byte 4th Byte 5th Byte
0x7B 0x55 0x01 0x01 0x7D
In point 1, we can see what the UART frame should look like. In point 2, I want to read and write the frame commands to start and stop blinking the LED.
My questions are:
How should I send multiple bytes over UART?
Do I need to build a frame of packets? If yes, how should I do that?
Also, how should I read the response?
I researched how to frame a packet and send a frame over UART, but I haven't found any useful blog posts or answers.
More Info:
Language: C
Compiler: GCC
MCU: ESP32
I hope I was able to explain it.
Thanks in advance for the help!!

Sending multiple bytes
Sending multiple bytes is straightforward with the ESP-IDF framework. Let's assume your command frame is in an array called frame and the length (in bytes) of the frame is stored in frame_length:
#include "driver/uart.h"

uint8_t frame[] = { 0x7B, 0x03, 0x02, 0x05, 0x07, 0x7D };
int frame_length = sizeof(frame);  // 6 bytes
uart_write_bytes(uart_port, (const char *)frame, frame_length);  // uart_port: e.g. UART_NUM_1
The bigger challenge is probably how to construct the frame in the first place, in particular how to calculate the checksum.
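Here is a minimal sketch of how such a frame could be built, applying the checksum rule from your documentation (lower byte of the sum of everything except start code, command type, checksum and end code); build_frame is a hypothetical helper name, not part of any API:

#include <stdint.h>
#include <string.h>

// Builds a command frame into buf and returns the total frame length L.
// cmd is the command type byte; data/data_len form the payload.
int build_frame(uint8_t *buf, uint8_t cmd, const uint8_t *data, int data_len) {
    buf[0] = 0x7B;                     // start code
    buf[1] = cmd;                      // command type
    buf[2] = (uint8_t)(data_len + 1);  // length byte: L - 4
    memcpy(buf + 3, data, data_len);
    uint32_t sum = buf[2];             // checksum covers length byte and data
    for (int i = 0; i < data_len; i++)
        sum += data[i];
    buf[3 + data_len] = (uint8_t)sum;  // lower byte of the sum
    buf[4 + data_len] = 0x7D;          // end code
    return data_len + 5;               // total bytes: L
}

Plugging in the bytes from frame example 1 (cmd 0x03, data 0x05) reproduces 0x7B 0x03 0x02 0x05 0x07 0x7D, so the checksum rule checks out.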
Sending multiple frames
Sending multiple frames is straightforward as well: just call the above function multiple times. The protocol has been carefully designed so that the receiver is able to split the stream of bytes into frames.
You should, however, prevent multiple tasks from sending frames concurrently; otherwise the bytes of different frames could get interleaved and the communication would get mixed up.
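If several tasks do need to send, one way to serialize them is a FreeRTOS mutex; a sketch, assuming uart_mutex has been created once with xSemaphoreCreateMutex() and send_frame is the single choke point all tasks go through:

#include "freertos/FreeRTOS.h"
#include "freertos/semphr.h"

static SemaphoreHandle_t uart_mutex;  // uart_mutex = xSemaphoreCreateMutex();

void send_frame(const uint8_t *frame, int frame_length) {
    xSemaphoreTake(uart_mutex, portMAX_DELAY);  // serialize UART access
    uart_write_bytes(uart_port, (const char *)frame, frame_length);
    xSemaphoreGive(uart_mutex);
}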
Receiving frames
Receiving isn't a problem either. Just read frame by frame. It's a two-step process:
Read 3 bytes. The third byte provides the length of the frame (it holds L - 4, where L is the total number of bytes).
Read the remaining L - 3 bytes, i.e. frame[2] + 1 bytes: the data, the checksum and the end code.
It could look like so:
#define MAX_FRAME_LENGTH 80

uint8_t frame[MAX_FRAME_LENGTH];

// Reads a single frame and returns its total length in bytes (or -1 on error).
int read_frame(uint8_t *frame) {
    // Read start code, command type and length byte.
    uart_read_bytes(uart_port, frame, 3, portMAX_DELAY);
    if (frame[2] + 4 > MAX_FRAME_LENGTH)
        return -1;  // frame would not fit into the buffer
    // The length byte holds L - 4, so frame[2] + 1 bytes remain:
    // data, checksum and end code.
    uart_read_bytes(uart_port, frame + 3, frame[2] + 1, portMAX_DELAY);
    return frame[2] + 4;  // total frame length L
}
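Putting it together for the blink LED command, a usage sketch built on the hypothetical build_frame and send_frame helpers above, with the response layout taken from Table 1.7:

uint8_t cmd_buf[MAX_FRAME_LENGTH];
uint8_t rsp_buf[MAX_FRAME_LENGTH];

// Start blinking with a 200 ms pulse width (Table 1.6).
int len = build_frame(cmd_buf, 0x05, (uint8_t[]){ 0x01, 0x02 }, 2);
send_frame(cmd_buf, len);

// Read the response; per Table 1.7 it is 5 bytes with command type 0x55.
int rsp_len = read_frame(rsp_buf);
if (rsp_len == 5 && rsp_buf[1] == 0x55) {
    // blink LED command acknowledged
}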

Related

Why is this uint32_t ordered this way in memory?

I'm learning about endianness, and I read that Intel processors are usually little-endian. I'm on an Intel Mac and thought I'd try it for myself to see it in action. I define a uint32_t and then try to print out the 4 bytes as they are ordered in memory.
uint32_t integer = 1027;
uint8_t * bytes = (uint8_t*)&integer;
printf("%04x\n", integer);
printf("%x%x%x%x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
Output:
0403
3400
I expected to see the bytes either in reverse order (3040) or unchanged, but the output is neither. What am I missing?
I'm actually compiling it as Objective-C using Xcode, if that makes any difference.
Because data is stored in units of bytes (8 bits) in today's typical computers.
On machines that use little endian, the first byte is 03, the second byte is 04, and the third and fourth bytes are 00.
The first digit 3 in the second line represents the first byte, and the second digit 4 represents the second byte. To show each byte with 2 digits, specify the width to print in the format, like:
printf("%02x%02x%02x%02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
That is endianness.
There are two different approaches for storing data in memory. Little endian and big endian.
In big endian the most significant byte is stored first.
In little endian the least significant byte is stored first.
Your system is little endian, as the data is stored as
03 04 00 00
On a big endian system, it would have been
00 00 04 03
For printing, use %02x to get the leading zeros printed.

How is a hex file converted into binary in a microcontroller?

I am new to embedded programming. I am using a compiler to convert source code into hex, which I will burn into a microcontroller. My question is: a microcontroller (like all ICs) supports binary numbers only (0 & 1). Then how does it work with a hex file?
The software that loads the program/data into the flash reads whatever format it supports, which may be Intel HEX, Motorola S-record, ELF, COFF, a raw binary, or something else, and then does the right thing to program the flash with just the relevant ones and zeros.
First of all, the PC you are using right now has a processor inside, which works just like any other microcontroller. You are using it to browse the internet, although it's all "1s and 0s on the inside". And I am presuming your actual firmware doesn't come even close to running what your PC is running at this moment.
microcontroller will support binary numbers only (0 & 1)
Your idea that a "microcontroller only supports binary numbers (0 & 1)" is a misconception. At its very lowest level, yes, a microcontroller contains a bunch of transistors, and each of them can store only two states of information (a bit).
But the reason for this is simply that it is a practical way to physically store one small chunk of data.
If you check the assembly instruction manual for your uC architecture, you will see a large number of instructions operating on different data widths (bits grouped into 8, 16 or larger chunks). If your controller is, say, 16-bit, then this will be the basic word size for most instructions, and the one that will be the most efficient. When programming in C, this will also be the size of the "special" int type which all smaller integral types get expanded to.
In other words, bits are just building blocks of your hardware, and most of the time shouldn't even concern you at the firmware level, let alone higher application levels. Compare it to a human life form: human body is made of cells, but is also capable of doing more than a single-cell organism, isn't it?
I am using a compiler to convert source code into hex
Actually, you are using the compiler to create the machine code for your particular microcontroller architecture. "Hex", or more precisely the Intel HEX file format, is just one of several file formats used for storing the machine code in a file, and it is conveniently a plain-text ASCII file which you can easily open in Notepad.
To clarify, let's say you wrote a simple line of C code like this:
a = b + c;
Your compiler needs to know which architecture you are targeting, in order to convert this to machine code. For a fictional uC architecture, this will first get compiled to the following fictional assembly language:
// compiler decides that a,b,c will be stored at addresses 0x1000, 1004, 1008
mov ax, (0x1004) // move value from address 0x1004 to accumulator
add ax, (0x1008) // add value from address 0x1008 to accumulator
mov (0x1000), ax // move value from accumulator to address 0x1000
Each of these instructions has its own instruction opcode, which can be found inside the assembly instruction manual. If the instruction operates on one or more parameters, uC will know that the bytes following the instruction are data bytes:
// mov ax, (addr) --> opcode 0x10
// add ax, (addr) --> opcode 0x20
// mov (addr), ax --> opcode 0x30
mov ax, (0x1004) // 0x10 (0x10 0x04)
add ax, (0x1008) // 0x20 (0x10 0x08)
mov (0x1000), ax // 0x30 (0x10 0x00)
Now you've got your machine-code, which, written as hex values, becomes:
10 10 04 20 10 08 30 10 00
And converted to binary becomes:
00010000000100000000010000100000...
To transfer this to your controller, you will use a file format which your flash uploader knows how to read, which is what Intel Hex is most commonly used for.
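For instance, the nine machine-code bytes above, placed at address 0x0000, would appear in an Intel HEX file as a single record (colon, byte count, address, record type, data, checksum):

:090000001010042010083010005B

The flash loader parses such records, verifies each checksum, and writes the raw data bytes to the stated addresses.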
Once transferred to your microcontroller, it will be stored as a bunch of bits in its flash memory, but the controller is designed to read these bits in chunks of 8 or more bits, and evaluate them as instruction opcodes or data, depending on the context. For the example above, it will read first 8 bits, and seeing that it's an instruction opcode 0x10 (which takes an additional address parameter), it will read the next two bytes to form the address 0x1004. It will then execute the instruction and advance the instruction pointer.
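A sketch of that fetch-and-decode loop for the fictional architecture above (the flash contents and opcodes are the made-up ones from this example, not any real instruction set):

#include <stdint.h>

uint8_t flash[] = { 0x10, 0x10, 0x04, 0x20, 0x10, 0x08, 0x30, 0x10, 0x00 };

void run(void) {
    int pc = 0;
    while (pc < (int)sizeof flash) {
        uint8_t opcode = flash[pc++];
        // Each of these opcodes takes a two-byte address parameter.
        uint16_t addr = (uint16_t)((flash[pc] << 8) | flash[pc + 1]);
        pc += 2;
        switch (opcode) {
            case 0x10: /* mov ax, (addr) */ break;
            case 0x20: /* add ax, (addr) */ break;
            case 0x30: /* mov (addr), ax */ break;
        }
        (void)addr;  // the fictional ALU work is omitted here
    }
}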
Hex, decimal, binary: they are all just ways of representing a number.
AA in hex is the same as 170 in decimal and 10101010 in binary (and 252 in octal).
The reason the hex representation is used is that it is very convenient when working with microcontrollers, as one hex character fits into one nibble. Hence F is 1111, FF is 1111 1111, and so forth.

How do I perform XOR of const char in objective C?

I need to send hexadecimal values to a device through the UDP/IP protocol, but before I send them I have to XOR the first two bytes with the two bytes of the "message sequence number". The problems are:
When and where do I find the MSB and LSB of the message sequence number?
How do I perform XOR on the first two bytes, and if I do so, how do I put them back into the original array?
Here is my array: const char connectByteArray[] = {0x21,0x01,0x01,0x00,0xC0,0x50};
I think the point below will help answer this:
"XOR the first byte of the encryption block with the MSB of the message sequence number, and XOR the second byte of the encryption block with the LSB of the message sequence number"
// Bitwise XOR operator is ^ .
// sequenceNumber is the 16-bit message sequence number.
uint8_t msb = (uint8_t)(sequenceNumber >> 8);   // MSB: shift the high byte down
uint8_t lsb = (uint8_t)(sequenceNumber & 0xFF); // LSB: mask off the low byte
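A minimal sketch of the whole operation, assuming a hypothetical 16-bit sequenceNumber; since the array is const, the XORed bytes go into a mutable copy which is then sent:

#include <stdint.h>
#include <string.h>

const char connectByteArray[] = { 0x21, 0x01, 0x01, 0x00, 0xC0, 0x50 };
uint16_t sequenceNumber = 0x1234;  // hypothetical message sequence number

char buffer[sizeof connectByteArray];
memcpy(buffer, connectByteArray, sizeof connectByteArray);  // mutable copy
buffer[0] ^= (uint8_t)(sequenceNumber >> 8);    // XOR first byte with the MSB
buffer[1] ^= (uint8_t)(sequenceNumber & 0xFF);  // XOR second byte with the LSB
// buffer now holds the modified bytes, ready to send over UDP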

Extracting ID from data packet GPS

I am trying to configure a GPS device for my systems. The GPS device sends the data packet to my IP in the following format:
$$�W��¬ÿÿÿÿ™U042903.000,A,2839.6408,N,07717.0905,E,0.00,,230111,,,A*7C|1.2|203|0000÷
I am able to extract the latitude, longitude and other information, but I am not able to extract the tracker ID out of the string.
According to the manual, the ID is in hex format, and the format of the packet is
$$<L(2 bytes)><ID(7 bytes)><command (2 bytes)><data><checksum (2 bytes)>\r\n
I don't know what to do with it; I have tried converting this to hex, but it didn't work.
Any help will be greatly appreciated.
How about more information? Which GPS? What interface (USB, serial)? What language are you working in?
Your data certainly looks odd. In my experience with GPS data, it's generally alphanumeric characters and separators, but it looks like you have either a corrupt string or non-alphanumeric values.
Update based upon additional information you provided:
The GPRS manual you supplied explains the format:
$$ - 2 bytes - in ASCII code (Hex code: 0x24)
L - 2 bytes - in hex code
ID - 7 bytes - in hex code.
For example, if ID is 13612345678, then it will be shown as follows:
0x13, 0x61, 0x23, 0x45, 0x67, 0x8f, 0xff.
command - 2 bytes - hex code
If I understand correctly, the gibberish characters after $$ and before the data field are not printable ASCII characters. They're actual numeric values, provided one byte at a time. If you convert each byte to a hexadecimal-formatted string and display it, you should see what I mean.
I don't remember my PHP well, but I think the ID could be formed into a hexadecimal-formatted string by something like this (note the ord() calls to get each byte's numeric value):
$s = GetYourGPRSStringFromWherever();
$sID = sprintf("0x%02x%02x%02x%02x%02x%02x%02x",
               ord($s[4]), ord($s[5]), ord($s[6]), ord($s[7]),
               ord($s[8]), ord($s[9]), ord($s[10]));
(also, strip out or ignore any 0xFF values, as per the documentation's example)

What is the meaning of \x00 and \xff in Websockets?

Why do messages going through WebSockets always start with \x00 and end with \xff, as in \x00Your message\xff?
This documentation might help...
Excerpt from section 1.2:-
Data is sent in the form of UTF-8 text. Each frame of data starts with a 0x00 byte and ends with a 0xFF byte, with the UTF-8 text in between.

The WebSocket protocol uses this framing so that specifications that use the WebSocket protocol can expose such connections using an event-based mechanism instead of requiring users of those specifications to implement buffering and piecing together of messages manually.

To close the connection cleanly, a frame consisting of just a 0xFF byte followed by a 0x00 byte is sent from one peer to ask that the other peer close the connection.

The protocol is designed to support other frame types in future. Instead of the 0x00 and 0xFF bytes, other bytes might in future be defined. Frames denoted by bytes that do not have the high bit set (0x00 to 0x7F) are treated as a stream of bytes terminated by 0xFF. Frames denoted by bytes that have the high bit set (0x80 to 0xFF) have a leading length indicator, which is encoded as a series of 7-bit bytes stored in octets with the 8th bit being set for all but the last byte. The remainder of the frame is then as much data as was specified. (The closing handshake contains no data and therefore has a length byte of 0x00.)
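A minimal sketch in C of this legacy framing, based purely on the excerpt above (sock is an assumed, already-connected socket descriptor):

#include <string.h>
#include <sys/socket.h>

// Frame a UTF-8 message per the old draft: 0x00, payload, 0xFF.
void ws_send_legacy(int sock, const char *msg) {
    const unsigned char start = 0x00, end = 0xFF;
    send(sock, &start, 1, 0);         // frame start byte
    send(sock, msg, strlen(msg), 0);  // UTF-8 payload
    send(sock, &end, 1, 0);           // frame end byte
}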
The working spec has changed and no longer uses 0x00 and 0xFF as start and end bytes:
http://tools.ietf.org/id/draft-ietf-hybi-thewebsocketprotocol-04.html
I am not 100% sure about this, but my guess would be that they signify the start and end of the message, since \x00 is a single-byte representation of 0 and \xFF is a single-byte representation of 255.