How is a hex file converted into binary in a microcontroller? - embedded

I am new to embedded programming. I am using a compiler to convert source code into hex, which I will burn into the microcontroller. My question is: microcontrollers (all ICs) support binary numbers only (0 & 1). Then how does this work with a hex file?

The software that loads the program/data into the flash reads whatever format it supports, which may be Intel hex, Motorola S-record, ELF, COFF, a raw binary, or something else, and then does the right thing to program the flash with just the relevant ones and zeros.

First of all, the PC you are using right now has a processor inside, which works just like any other microcontroller. You are using it to browse the internet, although it's all "1s and 0s on the inside". And I am presuming your actual firmware doesn't even come close to running what your PC is running at this moment.
microcontrollers (all ICs) support binary numbers only (0 & 1)
Your idea that a "microcontroller only supports binary numbers (0 & 1)" is a misconception. At its lowest level, yes, a microcontroller contains a bunch of transistors, and each of them can store only two states of information (a bit).
But this is simply because it is a practical way to physically store one small chunk of data.
If you check the assembly instruction manual for your uC architecture, you will see a large number of instructions operating on different data widths (bits grouped into 8, 16 or larger chunks). If your controller is, say, 16-bit, then this will be the basic word size for most instructions, and the one that will be the most efficient. When programming in C, this will also be the size of the "special" int type which all smaller integral types get expanded to.
In other words, bits are just the building blocks of your hardware, and most of the time they shouldn't even concern you at the firmware level, let alone higher application levels. Compare it to a human life form: the human body is made of cells, but it is also capable of doing more than a single-cell organism, isn't it?
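To connect this to C on such a 16-bit controller, here is roughly how the integer types would map (the exact widths are implementation defined; the declarations below are only an illustration, and the fixed-width types from <stdint.h> are what you would normally reach for):

#include <stdint.h>

/* Typical widths on a 16-bit microcontroller (implementation defined):
     char   8 bits
     int   16 bits   <-- the natural word size, used for integer promotions
     long  32 bits
   The fixed-width types avoid the guesswork: */
uint8_t  flags;       /* always 8 bits  */
uint16_t counter;     /* always 16 bits */
uint32_t timestamp;   /* always 32 bits */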
I am using a compiler to convert source code into hex
Actually, you are using the compiler to create the machine code for your particular microcontroller architecture. "Hex", or more precisely the Intel Hex file format, is just one of several file formats used for storing the machine code in a file, and it is, conveniently, a plain-text ASCII file which you can easily open in Notepad.
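Just to illustrate the format (this record is a generic example, not taken from your program), a single Intel Hex line breaks down like this:

:0300300002337A1E

:         start-of-record marker
03        byte count (3 data bytes in this record)
0030      16-bit load address for the data
00        record type (00 = data)
02 33 7A  the data bytes themselves
1E        checksum: two's complement of the low byte of the sum of all preceding bytes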
To clarify, let's say you wrote a simple line of C code like this:
a = b + c;
Your compiler needs to know which architecture you are targeting, in order to convert this to machine code. For a fictional uC architecture, this will first get compiled to the following fictional assembly language:
// compiler decides that a,b,c will be stored at addresses 0x1000, 1004, 1008
mov ax, (0x1004) // move value from address 0x1004 to accumulator
add ax, (0x1008) // add value from address 0x1008 to accumulator
mov (0x1000), ax // move value from accumulator to address 0x1000
Each of these instructions has its own instruction opcode, which can be found inside the assembly instruction manual. If the instruction operates on one or more parameters, uC will know that the bytes following the instruction are data bytes:
// mov ax, (addr) --> opcode 0x10
// add ax, (addr) --> opcode 0x20
// mov (addr), ax --> opcode 0x30
mov ax, (0x1004) // 0x10 (0x10 0x04)
add ax, (0x1008) // 0x20 (0x10 0x08)
mov (0x1000), ax // 0x30 (0x10 0x00)
Now you've got your machine-code, which, written as hex values, becomes:
10 10 04 20 10 08 30 10 00
And converted to binary becomes:
000100000001000000000100001000000001000000001000...
To transfer this to your controller, you will use a file format which your flash uploader knows how to read, which is what Intel Hex is most commonly used for.
Once transferred to your microcontroller, it will be stored as a bunch of bits in its flash memory, but the controller is designed to read these bits in chunks of 8 or more bits, and evaluate them as instruction opcodes or data, depending on the context. For the example above, it will read the first 8 bits and, seeing that it's the instruction opcode 0x10 (which takes an additional address parameter), it will read the next two bytes to form the address 0x1004. It will then execute the instruction and advance the instruction pointer.
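To make the fetch-decode-execute idea concrete, here is a small C sketch that interprets the fictional machine code above on a PC. The opcodes, the byte order of the address, and the one-byte-per-variable memory are all inventions of this example, not a real instruction set:

#include <stdint.h>
#include <stdio.h>

static uint8_t memory[0x2000];                        /* pretend data memory */
static const uint8_t flash[] = { 0x10, 0x10, 0x04,    /* mov ax, (0x1004)    */
                                 0x20, 0x10, 0x08,    /* add ax, (0x1008)    */
                                 0x30, 0x10, 0x00 };  /* mov (0x1000), ax    */

int main(void)
{
    uint16_t ax = 0;                                  /* accumulator         */
    size_t pc = 0;                                    /* instruction pointer */

    memory[0x1004] = 2;                               /* b                   */
    memory[0x1008] = 3;                               /* c                   */

    while (pc < sizeof flash) {
        uint8_t  opcode = flash[pc++];
        uint16_t addr   = (uint16_t)((flash[pc] << 8) | flash[pc + 1]);
        pc += 2;                 /* every fictional opcode here takes one address */

        switch (opcode) {
        case 0x10: ax = memory[addr];  break;         /* mov ax, (addr)      */
        case 0x20: ax += memory[addr]; break;         /* add ax, (addr)      */
        case 0x30: memory[addr] = ax;  break;         /* mov (addr), ax      */
        }
    }
    printf("a = %d\n", memory[0x1000]);               /* prints a = 5        */
    return 0;
}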

Hex, Decimal, Binary, they are all just ways of representing a number.
AA in hex is the same as 170 in decimal and 10101010 in binary (and 252 in octal).
The reason the hex representation is used is because it is very convenient when working with microcontrollers, as one hex digit maps exactly onto one nibble (4 bits). Hence F is 1111, FF is 1111 1111, and so forth.
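The same value can be printed in all of those representations with a few lines of C (print_binary is a helper written just for this example):

#include <stdio.h>

static void print_binary(unsigned v, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        putchar(((v >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    unsigned v = 0xAA;                  /* one byte, written in hex */
    printf("hex:     %X\n", v);         /* AA       */
    printf("decimal: %u\n", v);         /* 170      */
    printf("octal:   %o\n", v);         /* 252      */
    print_binary(v, 8);                 /* 10101010 */
    return 0;
}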

Related

What is the bit width of a single webassembly instruction?

I know that WebAssembly currently supports a 32-bit architecture, so I am supposing that, like RISCV32, its base instruction set has instructions which are 32 bits wide (of course, RISCV32 supports 16-bit compressed instructions and 48-bit ones as well). RISC-V's instructions are interpreted mostly as left-endian (in terms of bit indices).
For example, in RISC-V we can have an instruction like lui (load upper immediate into a register), which embeds a 20-bit immediate into the instruction, has a 5-bit field to encode the destination register, and a 7-bit field to specify the opcode. Among other things, the opcode contains two bits at the beginning that indicate whether the instruction is compressed or not. This is encoded in the specification, where lui has an LUI opcode.
RISC-V instructions have a variety of different layouts, specified in the specification as well; for example, the lui instruction takes the "U" format, so we know exactly where the 20-bit field is and where the 5-bit destination register is in the serialization.
What is the bit width of a wasm instruction? What are the possible layouts of a wasm instruction? Are there compressed instruction formats for webassembly, such as 16-bit instructions for very common operations?
If webassembly instructions are variable-width, how is the width of an instruction encoded for the interpreter?
Binary WASM bytecode has variable-length instructions, not fixed-width like a RISC CPU. https://en.wikipedia.org/wiki/WebAssembly#Code_representation has an example.
It's not intended to be executed directly, but rather JITed into native machine code, so a fixed-width format that would require multiple instructions for some 32- or 64-bit constants would just make more work for the JIT optimizer. It would also be less compact in the WASM binary format, and mean more instructions to parse.
Much better for the JIT optimizer to know the ultimate goal is to materialize a whole constant, since some ISAs will be able to do that in one instruction, and others will need it split up in different parts depending on the ISA. e.g. 20:12 for RISC-V, 16:16 for ARM movw/movk or MIPS, or if the constant only has set bits in a narrow region, ARM rotated immediates can maybe still use one instruction. Or AArch64 bit-pattern immediates can materialize a constant like 0x01010101 (or 0x0101010101010101) in a single 32-bit instruction.
TL:DR: Don't make the JIT put the pieces back together before breaking back down into asm that works for the target machine.
And in general, variable-length isn't much of a problem for a stream that will be parsed once by software anyway, not decoded repeatedly by hardware every time through a loop.
Examples
A lot of WebAssembly instructions take up one byte. For example, the left-shift instructions i32.shl and i64.shl have the single-byte opcodes 0x74 and 0x86 without any subsequent values, while the i32.const instruction, for example, starts with 0x41 and takes from 2 to 6 bytes.
Instruction    Opcode
i32.const      0x41
i64.const      0x42
f32.const      0x43
f64.const      0x44
i32.shl        0x74
i64.shl        0x86
i32.eqz        0x45
i32.eq         0x46
i64.eqz        0x50
i64.eq         0x51
And so on. The values here are taken from the MDN website. See the Numeric Instructions.
Encoding Numbers
Some instructions such as the const above require specifying the immediate, which increases the overall size of the instruction. The immediates are encoded in LEB128, and the variant depends on whether the integer is signed or unsigned. Those are normally given in the specification.
LEB128 works roughly like this: the bits of the number are padded to a multiple of seven and split into groups of seven, and the top bit of each byte indicates whether another byte follows. The numbers are constrained to their maximum width. Floating point numbers are encoded in IEEE 754.
The const instructions are followed by the respective literal.
All other numeric instructions are plain opcodes without any immediates.
Source: https://webassembly.github.io/spec/core/binary/instructions.html#numeric-instructions
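As a rough sketch of the unsigned variant in C (decode_uleb128 is a name made up for this example; the authoritative grammar is in the spec linked above):

#include <stdint.h>
#include <stddef.h>

/* Decode one unsigned LEB128 value starting at p, store it in *out and
   return the number of bytes consumed. Each byte carries 7 payload bits;
   the high bit says whether another byte follows. */
static size_t decode_uleb128(const uint8_t *p, uint64_t *out)
{
    uint64_t result = 0;
    unsigned shift = 0;
    size_t n = 0;

    for (;;) {
        uint8_t byte = p[n++];
        result |= (uint64_t)(byte & 0x7F) << shift;
        if ((byte & 0x80) == 0)          /* high bit clear: last byte */
            break;
        shift += 7;
    }
    *out = result;
    return n;
}

/* Example: the unsigned value 624485 encodes to the three bytes 0xE5 0x8E 0x26. */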
Wasm instructions are represented with a unique opcode (typically 1 byte, more for newer instructions), followed by the encodings of immediate operands, for instructions that have them. There is no fixed length; it depends on both the opcode and the immediate values.
For example:
i32.add is opcode 0x6A with no immediates;
i64.const i is opcode 0x42, followed by a variable-length encoding of i in LEB128 format;
br_table l* ld is opcode 0x0E, followed by a variable-length encoding of the length of l* in LEB128, followed by as many variable-length encodings of the label indices in l*, followed by the variable-length encoding of label index ld.
See the binary grammar in the specification for details. A Wasm decoder is essentially "parsing" the binary input according to this grammar.
Here are some citations from the current specification v2.0 related to the instructions (as "seen" by the specification itself):
some instructions also have static immediate arguments, typically
indices or type annotations, which are part of the instruction itself.
Some instructions are structured in that they bracket nested sequences of instructions.
In relation to the nesting:
Implementations typically impose additional restrictions on a number of aspects of a WebAssembly module or execution
Then, one of the noted implementation limitations is:
the nesting depth of structured control instructions
As the nesting depth of the instructions is not strictly defined by the specification but is left to the implementation to choose, that means that, as far as the specification is concerned, there is no limit on the length of instructions, regardless of whether they are encoded as binary or text.
Even if we ignore the structured instructions (as we should not), there are many instructions taking vectors as arguments. A vector's length is limited to 2^32-1. If my memory serves me right, there was also an instruction taking a vector of vectors as an argument.

What exactly is the size of an ELF symbol (both for 64 & 32 bit) & how do you parse it

According to Oracle's documentation on the ELF file format, a 64-bit ELF symbol is 30 bytes in size (8 + 1 + 1 + 4 + 8 + 8). However, when I use readelf to print out the section headers of an ELF file and then inspect the "EntSize" (entry size) member of the symbol table section header, it reads that the symbol entries are in fact only hex 0x18 (dec 24) in size.
I have attached a picture of readelf's output next to the Oracle documentation. The highlighted characters under "SYMTAB" are the "EntSize" member.
As I am about to write an ELF parser, I am curious as to which I should believe: the read value of the EntSize member or the documentation?
I have also attempted to look for an answer in this ELF documentation however it doesn't seem to go into any detail of the 64 bit ELF structures.
It should be noted that the ELF file I run readelf on, in the above picture, is a 64-bit executable.
EI_CLASS, the byte just after the ELF magic number, contains the "class" of the ELF file, with the value 2 meaning a 64-bit class.
When the 32-bit standard was drafted there were competing popular 64-bit architectures. The 32-bit standard was a bit vague about 64-bit matters, as it was quite possible at that time to imagine multiple competing 64-bit standards.
https://www.uclibc.org/docs/elf-64-gen.pdf
should cover the 64 bit standard with better attention to the 64 bit layouts.
The way you "parse" it is to read the bytes in the order described in the struct.
typedef struct {
Elf64_Word st_name;
unsigned char st_info;
unsigned char st_other;
Elf64_Half st_shndx;
Elf64_Addr st_value;
Elf64_Xword st_size;
} Elf64_Sym;
The first 4 bytes are st_name, the next byte is st_info, and so on. Of course, it is critical to know where the struct "starts" within the file, and the spec above should help with that.
In these type names, Elf64_Word is a 32-bit (4-byte) field, Elf64_Half is 16 bits (2 bytes), Elf64_Addr and Elf64_Xword are 64 bits (8 bytes) each, and unsigned char is a single byte.
So the Elf64_Sym has 4+1+1+2+8+8 bytes in it, or 24 bytes, which matches the 0x18 EntSize that readelf reports.
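As a sketch of the parsing itself, using the Elf64_Sym definition that <elf.h> provides on Linux (error handling and the section-header lookup are omitted, and the file is assumed to have the same endianness as the host):

#include <elf.h>
#include <stdio.h>

/* Dump the symbol table given its file offset and total size in bytes,
   both taken from the .symtab section header (sh_offset and sh_size). */
static void dump_symbols(FILE *f, long symtab_offset, long symtab_size)
{
    long count = symtab_size / (long)sizeof(Elf64_Sym);   /* sizeof == 24 */
    Elf64_Sym sym;

    fseek(f, symtab_offset, SEEK_SET);
    for (long i = 0; i < count; i++) {
        if (fread(&sym, sizeof sym, 1, f) != 1)
            break;
        printf("name offset %u, value 0x%lx, size %lu\n",
               sym.st_name,
               (unsigned long)sym.st_value,
               (unsigned long)sym.st_size);
    }
}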

Trying to replicate a CRC made with ielftool in srec_cat

So I'm trying to figure out a way to calculate a CRC with srec_cat before putting the code on a microcontroller. Right now, my post-build script uses the ielftool from IAR to do the calculation and insert it into the correct spot in the hex file.
I'm wondering how I can produce the same CRC with srec_cat, using the same hex file of course.
Here is the ielftool command that produces the CRC32 that I want to replicate:
--checksum APP_SYS_ApplicationCrc:4,crc32:1mi,0xffffffff;0x08060000-0x081fffff
APP_SYS_ApplicationCrc is the symbol where the checksum will be stored, with a 4-byte offset added
crc32 is the algorithm
1 specifies one’s complement
m reverses the input bytes and the final checksum
i initializes the checksum value with the start value
0xffffffff is the start value
And finally, 0x08060000-0x081fffff is the memory range for which the checksum will be calculated
I've tried a lot of things, but this, I think, is the closest I've gotten to the same command so far with srec_cat:
-crop 0x08060000 0x081ffffc -Bit_Reverse -crc32_b_e 0x081ffffc -CCITT -Bit_Reverse
-crop 0x08060000 0x081ffffc In a way specifies the memory range for which the CRC will be calculated
-Bit_Reverse should do the same thing as m in the ielftool when put in the right spot
-crc32_b_e is the algorithm. (I'm not sure yet if I need big endian _b_e or little endian _l_e)
0x081ffffc is the location in memory to place the CRC
-CCITT The initial seed (start value in ielftool) is all one bits (it's the default, but I figured I'd throw it in there)
Does anyone have ideas of how I can replicate the ielftool's CRC? Or am I just trying in vain?
I'm new to CRCs and don't know much more than the basics. Does it even matter anyway if I have exactly the same algorithm? Won't the CRC still work when I put the code on a board?
Note: I'm currently using ielftool 10.8.3.1326 and srec_cat 1.63
After many days of trying to figure out how to get the CRCs from each tool to match (and to make sure I was giving both tools the same data), I finally found a solution.
Based on Mark Adler's comment above, I was trying to figure out how to get the CRC of a small amount of data, such as an unsigned int. I finally had a lightbulb moment this morning: I realized that I simply needed to put a uint32_t with the value 123456789 in the code of the project I was already working on. Then I would place the variable at a specific location in memory using:
#pragma location=0x08060188
__root const uint32_t CRC_Data_Test = 123456789; //IAR specific pragma and keyword
This way I knew the variable's location and length, so I could then tell ielftool and srec_cat to only calculate the CRC over the area of that variable in memory.
I then took the ELF file from the compiled project and created an Intel hex file, so I could more easily check that the correct variable data was at the correct address.
Next I sent the elf file through ielftool with this command:
ielftool proj.elf --checksum APP_SYS_ApplicationCrc:4,crc32:1mi,0xffffffff;0x08060188-0x0806018b proj.elf
And I sent the hex file through srec_cat with this command:
srec_cat proj.hex -intel -crop 0x08060188 0x0806018c -crc32_b_e 0x081ffffc -o proj_srec.hex -intel
After converting the ELF with the CRC to a hex file and comparing the two hex files, I saw that the CRCs were very similar. The only difference was the endianness. Changing -crc32_b_e to -crc32_l_e got both tools to give me 9E 6C DF 18 as the CRC.
I then changed the memory address ranges for the CRC calculation to what they originally were (see the question) and I once again got the same CRC with both ielftool and srec_cat.
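If you want a third, independent cross-check, the widely used reflected CRC-32 (polynomial 0xEDB88320, initial value 0xFFFFFFFF, final inversion) is only a few lines of C. Whether this exact variant matches the crc32:1mi settings above is something to verify against your known test value rather than assume, since the option letters control the reflection and complementing:

#include <stdint.h>
#include <stddef.h>

/* Bit-by-bit reflected CRC-32: init 0xFFFFFFFF, poly 0xEDB88320, final XOR. */
static uint32_t crc32_ref(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return ~crc;
}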

What is the minimal number of bytes you have to change to skip a function?

Consider that you get an ELF that has a segmentation fault in a function named print_debug.
Since that function is not relevant for the program, you want to "cancel" the function manually by using Hexedit.
The size of the function is 100 bytes.
What is the minimal number of bytes that need to be changed to fix the file?
The possible answers are:
1
2
99
The answer is: it depends on the instruction set.
On i*86 and x86_64 you can use a single-byte RET, but on a typical RISC machine you would need 4 bytes, and on ARM in Thumb mode you will need 2 (I think).
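As a concrete sketch of the single-byte x86 case, assuming you have already located the file offset of print_debug's first byte (the file name and the offset below are made up for the example):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("program.elf", "r+b");   /* hypothetical file name                  */
    long print_debug_offset = 0x1234;        /* made-up file offset of the first byte   */

    if (f == NULL)
        return 1;
    fseek(f, print_debug_offset, SEEK_SET);
    fputc(0xC3, f);                          /* x86 'ret': the function returns at once */
    fclose(f);
    return 0;
}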

Can Fortran read bytes directly from a binary file?

I have a binary file that I would like to read with Fortran. The problem is that it was not written by Fortran, so it doesn't have the record length indicators. So the usual unformatted Fortran read won't work.
I had a thought that I could be sneaky and read the file as a formatted file, byte-by-byte (or 4 bytes by 4 bytes, really) into a character array and then convert the contents of the characters into integers and floats via the transfer function or the dreaded equivalence statement. But this doesn't work: I try to read 4 bytes at a time and, according to the POS output from the inquire statement, the read skips over like 6000 bytes or so, and the character array gets loaded with junk.
So that's a no go. Is there some detail in this approach I am forgetting? Or is there just a fundamentally different and better way to do this in Fortran? (BTW, I also tried reading into an integer*1 array and a byte array. Even though these codes would compile, when it came to the read statement, the code crashed.)
Yes.
Fortran 2003 introduced stream access into the language. Prior to this most processors supported something equivalent as an extension, perhaps called "binary" or similar.
Unformatted stream access imposes no record structure on the file. As an example, to read data from the file that corresponds to a single int in the companion C processor (if any) for a particular Fortran processor:
USE, INTRINSIC :: ISO_C_BINDING, ONLY: C_INT
INTEGER, PARAMETER :: unit = 10
CHARACTER(*), PARAMETER :: filename = 'name of your file'
INTEGER(C_INT) :: data
!***
OPEN(unit, FILE=filename, ACCESS='STREAM', FORM='UNFORMATTED')
READ (unit) data
CLOSE(unit)
PRINT "('data was ',I0)", data
You may still have issues with endianness and data type size, but those aspects are language independent.
If you are writing to a language standard prior to Fortran 2003, then unformatted direct access reading into a suitable integer variable may work; it is Fortran processor specific, but works for many of the current processors.
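For completeness, a C program of the kind that might have written such a file looks like this (the file name is just a placeholder); the stream-access READ above would then recover the value, subject to the endianness and type-size caveats already mentioned:

#include <stdio.h>

int main(void)
{
    int value = 42;
    FILE *f = fopen("name of your file", "wb");

    if (f == NULL)
        return 1;
    fwrite(&value, sizeof value, 1, f);   /* raw bytes, no record-length markers */
    fclose(f);
    return 0;
}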