Having trouble using sprintf or snprintf to transmit an ADC value through UART

I am new to C and programming in general. I am transmitting an ADC value through UART, but it shows up as an ASCII character in the CCS terminal. I want to see it as a decimal value in the terminal for easy debugging. When I try to use snprintf or sprintf I get the warning:
"a value of type char* cannot be assigned to an entity of type "unsigned int""
Does this have to do with the way the MSP430 UART buffer works, or is my usage more likely wrong, and if so, how?
How large does the buffer need to be for a 16-bit value?
I am receiving a constant value in the terminal, not in the ASCII range, even though the value in ADCMEM0 changes as expected. I am trying to get my ADC values as decimal numbers in the terminal; so far I can only get the correct ASCII values.
Code:
int i;
while (1) {
    ADCCTL0 |= ADCENC | ADCSC;            // Enable and start conversion
    while ((ADCIFG & ADCIFG0) == 0);      // Wait for the conversion to complete
    for (i = 0; i < 32000; i = i + 1) {
        char buffer[20];
        int Sensor = ADCMEM0;
        snprintf(buffer, 10, "%d", Sensor);
        UCA1TXBUF = buffer;               // transmit ADC_Value over UART A1 Tx -- this assignment triggers the warning
    }
}
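For reference, a minimal sketch of how the formatted string could be sent one byte at a time (this assumes eUSCI-style register names such as UCA1IFG and UCTXIFG, matching the UCA1TXBUF register used above; an illustration, not a verified fix):

char buffer[20];
int sensor = ADCMEM0;
snprintf(buffer, sizeof buffer, "%d\r\n", sensor);

// UCA1TXBUF holds only one byte, so the string has to be sent
// one character at a time, waiting for the transmitter in between.
for (int i = 0; buffer[i] != '\0'; i++) {
    while (!(UCA1IFG & UCTXIFG));   // wait until the TX buffer is ready
    UCA1TXBUF = buffer[i];
}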

Related

How should I frame the data and send multiple bytes over UART?

I'm trying to write code for a switchboard touch sensor that communicates with an MCU (ESP32) using the UART protocol. There is a packet frame that has to be written to the UART to get a reading. Let me share some documentation:
1.
In the API frame structure, the following is the fixed definition of any command frame:
Every first byte of a frame is fixed at 0x7B ("{" in ASCII, 123 in decimal).
Every second byte of a frame is the 'command type' byte; it tells you what to do with the rest of the data. It also acts as a frame identifier in the data received from the touch panel (response frames and event frames).
The third byte is the length of the frame. It is a 1-byte value equal to L - 4, where L is the total number of bytes in the whole frame.
The second-to-last byte is the checksum. The checksum is the lower byte of the sum of all individual bytes of the frame except the first byte (start byte), the second byte (command type byte), the second-to-last byte (the checksum byte itself), and the last byte (end byte).
The last byte is 0x7D; it is the end code that indicates the end of the frame ("}" in ASCII, 125 in decimal).
For example, consider the following frame.
Table 1.1 Frame example 1.
1st Byte    2nd Byte   3rd Byte   4th Byte   5th Byte   6th Byte
0x7B        0x03       0x02       0x05       0x07       0x7D
Start Code  Command    Length     Data       Checksum   End Code
So the checksum is the lower byte of the sum of the third and fourth bytes:
0x02 + 0x05 = 0x07, so we take 0x07 as the checksum here.
Example 2: consider the following frame.
Table 1.2 Frame example 2.
1st Byte    2nd Byte          3rd Byte   4th Byte   5th Byte   6th Byte   7th Byte   8th Byte
0x7B        0x52              0x04       0x02       0xFF       0x14       0x19       0x7D
Start Code  Frame Identifier  Length     Data       Data       Data       Checksum   End Code
In example 2 the checksum is the lower byte of the sum of the third to sixth bytes:
0x04 + 0x02 + 0xFF + 0x14 = 0x0119, so we take 0x19 as the checksum here.
2.
Blink LED (All slider LED) control.
1. Command
This packet is used to control blinking of the LED. Hardware version 2.0 has a dedicated status LED, which is used to indicate the status of the product as needed.
Table 1.6 Blink LED command package detail.
Status  1st Byte    2nd Byte  3rd Byte  4th Byte      5th Byte              6th Byte  7th Byte
Start   0x7B        0x05      0x03      0x01 (Start)  (0x01 to 0xNN*)       Checksum  0x7D
Stop    0x7B        0x05      0x03      0x00 (Stop)   0x00                  Checksum  0x7D
        Start Code  Command   Length    Start/Stop    Pulse width (x100ms)  Checksum  End Code
To start the status LED blinking, the start command frame is sent with the 4th byte set to 0x01. The 5th byte gives the pulse width in units of 100 ms; for example, to make the status LED blink with a 200 ms duration, the value of the 5th byte is 0x02.
To stop the status LED blinking, the stop frame is sent.
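(Applying the checksum rule from point 1, the 200 ms start frame would presumably be 0x7B 0x05 0x03 0x01 0x02 0x06 0x7D, since the checksum is the lower byte of 0x03 + 0x01 + 0x02 = 0x06.)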
2. Response
Table 1.7 Blink LED response detail.
1st Byte 2nd Byte 3rd Byte 4th Byte 5th Byte
0x7B 0x55 0x01 0x01 0x7D
In point 1 we can see what a UART frame should look like. For point 2, I want to write the frame commands to start and stop blinking the LED, and to read the response.
My questions are:
How should I send multiple bytes over UART?
Do I need to build a frame of packets? If yes, how should I do that?
Also, how should I read the response?
I researched how to frame a packet and send it over UART, but did not find any useful blogs or answers.
More Info:
Language: C
Compiler: GCC
MCU: ESP32
I hope I was able to explain it.
Thanks in advance for the help!!
Sending multiple bytes
Sending multiple bytes is straightforward with the ESP-IDF framework. Let's assume your command frame is in an array called frame and its length (in bytes) is stored in frame_length:
uint8_t frame[] = { 0x7B, 0x03, 0x02, 0x05, 0x07, 0x7D };
int frame_length = 6;
uart_write_bytes(uart_port, frame, frame_length);
The bigger challenge is probably how to construct the frame in the first place, in particular how to calculate the checksum.
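As a starting point, here is a minimal sketch of how such a frame could be constructed, following the checksum rule from the documentation (the checksum covers the length byte and the data bytes). The build_frame helper is hypothetical, not part of ESP-IDF:

#include <stdint.h>
#include <stddef.h>

// Hypothetical helper: builds a command frame into `out` and returns
// the total frame length in bytes. `payload` holds only the data bytes.
size_t build_frame(uint8_t *out, uint8_t command,
                   const uint8_t *payload, uint8_t payload_len)
{
    size_t total = (size_t)payload_len + 5;  // start + cmd + length + checksum + end
    out[0] = 0x7B;                           // start code
    out[1] = command;                        // command type / frame identifier
    out[2] = (uint8_t)(payload_len + 1);     // length byte, defined as total - 4
    uint16_t sum = out[2];                   // checksum covers length byte + data
    for (uint8_t i = 0; i < payload_len; i++) {
        out[3 + i] = payload[i];
        sum += payload[i];
    }
    out[total - 2] = (uint8_t)(sum & 0xFF);  // lower byte of the sum
    out[total - 1] = 0x7D;                   // end code
    return total;
}

Rebuilding the frame from Table 1.2 as a check: build_frame(frame, 0x52, (uint8_t[]){0x02, 0xFF, 0x14}, 3) produces 7B 52 04 02 FF 14 19 7D, matching the documented checksum of 0x19.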
Sending multiple frames
Sending multiple frames is straightforward as well: just call the above function multiple times. The protocol has been carefully designed so that the receiver can split the stream of bytes back into frames.
You should, however, prevent multiple tasks from sending frames concurrently; otherwise the communication could get mixed up.
Receiving frames
Receiving isn't a problem either. Just read frame by frame; it's a two-step process:
Read 3 bytes. The third byte provides the length of the frame.
Read the remaining bytes.
It could look like so:
#define MAX_FRAME_LENGTH 80

uint8_t frame[MAX_FRAME_LENGTH];

// Reads one complete frame into `frame` and returns its total length.
int read_frame(uint8_t* frame) {
    // The third byte is the length field, defined as L - 4, so the
    // total frame length is frame[2] + 4.
    uart_read_bytes(uart_port, frame, 3, portMAX_DELAY);
    // 3 bytes have already been read, so frame[2] + 1 bytes remain.
    uart_read_bytes(uart_port, frame + 3, frame[2] + 1, portMAX_DELAY);
    return frame[2] + 4;
}
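Once a frame is in the buffer, you would presumably also want to validate it before trusting its contents. A sketch, reusing the checksum rule from the documentation (frame_is_valid is a hypothetical helper):

#include <stdbool.h>
#include <stdint.h>

// Returns true if the frame has the expected start/end codes and checksum.
bool frame_is_valid(const uint8_t *frame, int length) {
    if (length < 5 || frame[0] != 0x7B || frame[length - 1] != 0x7D)
        return false;
    uint16_t sum = 0;
    for (int i = 2; i < length - 2; i++)  // length byte + data bytes
        sum += frame[i];
    return (uint8_t)(sum & 0xFF) == frame[length - 2];
}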

How exactly does file compression work at a low level using Huffman coding? (in C)

TL;DR: How does the compression of plaintext using a Huffman code actually work?
I'm currently learning the Huffman coding algorithm and its application to text file compression. I understand that we can store the same data in less space by using an encoding technique (e.g. Huffman coding) that is determined by the frequency distribution of each character in the text file.
In Huffman coding we want the most frequent character in a text file to get the shortest binary representation (variable-length encoding), so in total the file needs less storage than with a fixed-length encoding such as ASCII.
However, I still have no idea how to actually implement the compression. What kind of file should I use to store the Huffman-encoded binary representation of the text file? How does the process of compressing the plaintext (probably in .txt format) into a compressed file actually work? Does decompression also work the same way as compression, just in the reverse direction?
I've tried using a binary file in C to store the binary representation of a .txt file. As you might expect, the binary file actually became bigger than the original file.
I've read that converting a plain-text file into a compressed file is just a matter of replacing each letter with the appropriate bit string and then handling the possibility of some extra bits that need to be written. However, I still haven't found any good reference on what a bit string is and how to work with one.
Any reference would be helpful, and any answer with C implementation would be perfect. Thank you.
There is only one kind of file: a sequence of bytes, each byte having eight bits. For Huffman coding you treat the file as a sequence of bits instead of bytes. You accumulate the bits in a buffer, and whenever you have complete bytes you write them out to the file. Something like:
// Write the low `bits` bits of `code` to stdout. The remaining bits of
// `code` must be zero. Call with bits == -1 to flush the final partial byte.
void put_bits(int bits, unsigned code) {
    static int have = 0;
    static unsigned buf = 0;
    if (bits == -1) {
        // flush remaining bits
        if (have) {
            putchar(buf);
            have = 0;
            buf = 0;
        }
        return;
    }
    buf |= code << have;    // append the new bits above those already buffered
    have += bits;
    while (have >= 8) {     // write out every complete byte
        putchar(buf);
        buf >>= 8;
        have -= 8;
    }
}
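A minimal usage sketch (the bit counts and code values below are made up for illustration, not derived from an actual Huffman tree; note that this routine packs bits least-significant-bit first):

#include <stdio.h>

// put_bits() as defined above.

int main(void) {
    put_bits(3, 0x5);   // queue 3 bits with value 0x5
    put_bits(2, 0x2);   // queue 2 more bits; still less than a full byte
    put_bits(5, 0x13);  // 10 bits are now buffered, so one byte is written
    put_bits(-1, 0);    // flush the remaining 2 bits as a final padded byte
    return 0;
}

Decompression is the mirror image: read the file back a byte at a time, hand out its bits starting from the low end, and walk the Huffman tree bit by bit until a leaf (a character) is reached.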

Returning uint Value from Char in gawk

I'm trying to get the value of ASCII chars I receive via RS232, in order to convert them into binary-like values.
Example:
0xFF-->########
0x01--> #
0x02--> #
...
My problem is getting the value of ASCII chars higher than 127.
Test-Code to get the int value:
echo -e "\xFF" | gawk -l ordchr -e '{printf("%c : %i", ord($0),ord($0))}'
Return:
� : -1
Test-Code 2:
echo -e "\x61" | gawk -l ordchr -e '{printf("%c : %i", ord($0),ord($0))}'
Return:
a : 97
So my solution to convert the values into unsigned int looks like this:
if (ord($0) < 0)
{
    new_char = ord($0) + 256;
}
else new_char = ord($0);
But I wanted to know if there is a way to cast an int directly to uint in gawk.
Later I tried to write my own ord() function.
#!/bin/bash
echo -e "\xFF" | awk 'BEGIN { _ord_init() }
{
    printf("%s : %d\n", $0, ord($0))
}

function _ord_init(    i, t)
{
    for (i = 0; i <= 255; i++) {
        t = sprintf("%c", i)
        _ord_[t] = i
    }
}

function ord(str,    c)
{
    # only first character is of interest
    c = substr(str, 1, 1)
    return _ord_[c]
}'
0xFF returns:
� : 0
0x61 returns:
a : 97
Can someone explain this behavior to me?
I'm using:
GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4-p1, GNU MP 6.1.1)
But I wanted to know if there is a way to cast an int directly to uint in gawk.
Actually, any string in awk is, in the end, a number.
Strings are converted to numbers and numbers are converted to strings,
if the context of the awk program demands it. [...] A string is
converted to a number by interpreting any numeric prefix of the string
as numerals: "2.5" converts to 2.5, "1e3" converts to 1,000, and
"25fix" has a numeric value of 25. Strings that can’t be interpreted
as valid numbers convert to zero. source
Let's make a quick test:
BEGIN {
    print 0xff
    print 0xff + 0
    print 0xff + 0.0
    print "0xff"
}
# 255
# 255
# 255
# 0xff
So any hex literal is automatically interpreted as an unsigned int. Casting an int to uint is a tricky question in general: you would convert the modulus of the int to hex, then set the sign bit as the MSB if the number is negative. But you should not need to do any of that in awk.
Remember that conversion is made as a call to sprintf() and you may control it via the CONVFMT variable:
CONVFMT
A string that controls the conversion of numbers to strings
(see section Conversion of Strings and Numbers). It works by being
passed, in effect, as the first argument to the sprintf() function
(see section String-Manipulation Functions). Its default value is
"%.6g". CONVFMT was introduced by the POSIX standard. source
Remember that locale settings may affect the way the conversion is performed, especially with the decimal separator. For more, see this, which is out of scope.
Can someone explain this behavior to me?
I can't actually reproduce it, but I suspect this line of code:
# only first character is of interest
c = substr(str, 1, 1)
In your example, the first char is always 0 and the output should always be the same. I'm testing this online.
Here is another example of mine:
BEGIN {
    a = 0xFF
    b = 0x61
    printf("a: %d %f %X %s %c\n", a, a, a, a, a)
    printf("b: %d %f %X %s %c\n", b, b, b, b, b)
}
# a: 255 255.000000 FF 255 ÿ
# b: 97 97.000000 61 97 a
Either run gawk in binary mode (gawk -b) to stop it from pre-stitching UTF-8 code points. Split the input by // (the empty string); each slot in the resulting array will then contain something 1 byte wide.
For the other way around, just pre-build an array from 0 to 255. Gawk doesn't stop there at all: in my routine gawk startup sequence, I build that same custom ord table from 0x3134F all the way back to zero (around 210k entries or so). The reason to do it backwards is that, for whatever reason, some code points come out as identical characters that gawk can't differentiate; going in reverse ensures the lowest code point is assigned to each. For this mode I run gawk in regular UTF-8 mode.
For your scenario I would just pre-build a 4-hex-wide array from 0x0000 to 0xFFFF mapping back to their integer values; then for each pair 0xZZ 0xWW, look up ZZWW in that dictionary and get back an integer.
If you just try ord() from 128 to 255 it usually won't work like that, because 128 (0x80) is where Unicode begins using 2 bytes; 0x800 begins 3 bytes, and 0x10000 begins 4 bytes. I'm not too familiar with the encodings that extend ASCII to 256; they usually require iconv or similar to get back to UTF-8 first.
A quick note: if you have raw UTF-8 bytes and want to figure out how many stitched UTF-8 code points there are, just delete everything in 0x80-0xBF; the length() of the residue is the number of code points.
In decimal lingo, the 4 ranges of 64 numbers from 0 to 255 are:
000 - 063 : ASCII
064 - 127 : ASCII
128 - 191 : UTF-8 multi-byte continuation bytes (the 0x80-0xBF range)
192 - 255 : the leading byte of a UTF-8 multi-byte character
and this looks hideous in decimal. Luckily, octal comes to the rescue: the 0x80-0xBF range is just \200-\277, and you can use any of AWK's regex facilities to find those (also for FS / RS etc.). I was spending time manually coding up the UTF-8 bit-shifting algorithm before realizing, much later, that I didn't need it to reach my end goal.
You can easily beat the system's built-in wc -m command if you want to count UTF-8 code points by combining the logic above with mawk2. On my 2.5-year-old laptop, against a 1.83 GB flat text file filled with Unicode, I got it down to roughly 19 seconds to count 1.29 billion UTF-8 code points, using just awk.
I've run into the same problem myself. I ended up first with a detector for whether gawk is running in Unicode mode or byte mode (check whether length() of a 3-octal-value combo that makes up one UTF-8 code point returns 1 or 3).
Then, when it detects gawk in Unicode mode, it runs a shell command from within gawk, uses unix printf to print out bytes 128-255, and chunks them back into a gawk array. If you need it I can paste the code sometime (but it's SUPER hideous, so I hope I won't get dinged for its lack of elegance).
Because there are bytes like C0, C1, or FF that simply don't exist in valid UTF-8, no matter what combination you attempt you cannot get gawk to generate all 256 byte values on its own. Another way to do it would be pre-making that chain and using something like xxd -ps to store it as a hash string, only converting it back at runtime, but that's admittedly slower.

The use of strncmp and memcmp

Does
if(strncmp(buf, buf2, 7) == 0)
do the same thing as
if(memcmp(buf, buf2, 7) == 0)
buf and buf2 are char* arrays or similar.
I was going to append this to another question but then decided it was perhaps better to post it separately. Presumably the answer is either a trivial "yes" or, if not, what is the difference?
(I found these functions from online documentation, but wasn't sure about strncmp because the documentation was slightly unclear.)
Like strcmp(), strncmp() is for comparing strings, therefore it stops comparing when it finds a string terminator in at least one argument. Any differences past that point have no effect on the result. strncmp() differs in that it will also stop comparing after the specified number of bytes if it does not encounter a terminator before then.
memcmp(), on the other hand, is for comparing blocks of random memory. It compares up to the specified number of bytes from each block until it finds a difference, regardless of the values of the bytes. That is, it does not stop at string terminators.
In C and C++ the end of a string is indicated by a byte with value 0.
The function memcmp does not care about the end of a string; it will in any case compare exactly the number of bytes specified.
In contrast, the function strncmp will stop at a byte with value 0 even if the specified number of bytes to compare has not yet been reached.
The main difference between strncmp() and memcmp() is that the former is sensitive to (stops at) '\0', whereas the latter is not. If the first 7 bytes of memory in buf and buf2 do not contain a '\0', then the behaviour is the same.
Consider the following example:
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[] = "123\0 12";
    char buf2[] = "123\0 34";
    printf("strncmp(): %d\n", strncmp(buf, buf2, 7));
    printf("memcmp(): %d\n", memcmp(buf, buf2, 7));
    return 0;
}
It will output:
strncmp(): 0
memcmp(): -2
Because strncmp() stops at buf[3], where it finds a '\0', whereas memcmp() continues until all 7 bytes have been compared.

How do I perform XOR of const char in Objective-C?

I need to send hexadecimal values to a device through the UDP/IP protocol, but before I send I have to XOR the first two bytes with the two bytes of the "message sequence number". The problems are:
when and where do I find the MSB and LSB of the message sequence number?
how do I XOR the first two bytes, and if I do so, how do I put them back into the original array?
Here is my array: const char connectByteArray[] = {0x21,0x01,0x01,0x00,0xC0,0x50};
I think the point below will help answer this:
"XOR the first byte of the encryption block with the MSB of the message sequence number, and XOR the second byte of the encryption block with the LSB of the message sequence number"
// Bitwise XOR operator is ^ .
// sequenceNumber is assumed to hold the 16-bit message sequence number.
uint8_t msb = (uint8_t)((sequenceNumber >> 8) & 0xFF);  // most significant byte
uint8_t lsb = (uint8_t)(sequenceNumber & 0xFF);         // least significant byte
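A minimal sketch putting this together, assuming a 16-bit message sequence number (the variable name sequenceNumber and its value are placeholders; where the number actually comes from depends on the device protocol):

#include <stdint.h>
#include <string.h>

int main(void) {
    const char connectByteArray[] = { 0x21, 0x01, 0x01, 0x00, 0xC0, 0x50 };
    uint16_t sequenceNumber = 0x1234;   // placeholder value

    // Work on a copy so the original array stays intact.
    char message[sizeof connectByteArray];
    memcpy(message, connectByteArray, sizeof connectByteArray);

    // XOR the first byte with the MSB and the second byte with the LSB
    // of the message sequence number; the rest is sent unchanged.
    message[0] ^= (char)((sequenceNumber >> 8) & 0xFF);  // MSB
    message[1] ^= (char)(sequenceNumber & 0xFF);         // LSB

    // message is now ready to be sent over UDP.
    return 0;
}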